Risk from Oklahoma’s Induced Earthquakes: The Cost of Declustering

Author(s): Jeremy Maurer, Deborah Kane, Marleen Nyst, Jessica Velasquez

ABSTRACT The U.S. Geological Survey (USGS) released a one-year seismic hazard map for the central and eastern United States (CEUS) for each year from 2016 to 2018 to address the problem of induced and triggered seismicity (ITS) in the region. ITS in areas with historically low rates of earthquakes presents both challenges and opportunities to learn about crustal conditions, but few scientific studies have considered the financial risk implications of damage caused by ITS. We address this issue directly by modeling earthquake risk in the CEUS using the 1 yr hazard model from the USGS and the RiskLink software package developed by Risk Management Solutions, Inc. We explore the sensitivity of risk to declustering and to the b-value, and consider whether declustering methods developed for tectonic earthquakes are suitable for ITS. In particular, the Gardner and Knopoff (1974) declustering algorithm has been used in every USGS hazard forecast, including the recent 1 yr forecasts, but it leads to the counterintuitive result that earthquake risk in Oklahoma was at its highest level in 2018, even though one-fifth as many earthquakes occurred as in 2016. Our analysis shows that this results from (1) the peculiar behavior of the declustering algorithm when seismicity rates vary in space and time, (2) the fact that the frequency–magnitude distribution of earthquakes in Oklahoma is not well described by a single b-value, and (3) the fact that at later times seismicity is more spatially diffuse and rate increases occur closer to populated areas. ITS in Oklahoma may combine swarm-like events with tectonic-style events, which have different frequency–magnitude and aftershock distributions. New hazard-estimation algorithms need to be developed to account for these unique characteristics of ITS.
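The Gardner and Knopoff (1974) algorithm discussed above is a window method: each event defines a space–time window scaled by its magnitude, and any smaller event inside the window of a larger one is flagged as dependent. A minimal sketch in Python, using one common parameterization of the windows (the exact coefficients vary between implementations and are an assumption here, as is the largest-first linking scheme):

```python
import math

def gk_windows(mag):
    """Gardner-Knopoff-style space-time windows (one common fit):
    returns (distance window in km, time window in days)."""
    d_km = 10 ** (0.1238 * mag + 0.983)
    if mag >= 6.5:
        t_days = 10 ** (0.032 * mag + 2.7389)
    else:
        t_days = 10 ** (0.5409 * mag - 0.547)
    return d_km, t_days

def decluster(events):
    """events: list of (time_days, x_km, y_km, mag).
    Returns a flag per event: True = independent (mainshock),
    False = dependent (fore/aftershock). Events are processed
    largest-first; smaller events inside a larger event's window
    are flagged dependent."""
    order = sorted(range(len(events)), key=lambda i: events[i][3], reverse=True)
    independent = [True] * len(events)
    for i in order:
        if not independent[i]:
            continue  # dependent events do not get their own window
        t0, x0, y0, m0 = events[i]
        d, t = gk_windows(m0)
        for j in range(len(events)):
            if j == i or not independent[j]:
                continue
            tj, xj, yj, mj = events[j]
            if mj <= m0 and abs(tj - t0) <= t and math.hypot(xj - x0, yj - y0) <= d:
                independent[j] = False
    return independent
```

For an M 5 mainshock, these windows span roughly 40 km and 140 days, so a small nearby event a day later is removed while a distant event survives as independent.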

Author(s):  
Alireza Babaie Mahani

Critical analysis of induced earthquake occurrences requires comprehensive datasets obtained by dense seismographic networks. In this study, using such datasets, I conduct a detailed investigation of induced seismicity in the Montney play of northeast British Columbia, mostly caused by hydraulic fracturing. The frequency-magnitude distribution (FMD) of earthquakes in several temporal and spatial clusters shows fundamental discrepancies between seismicity in the southern Montney play (2014-2018) and the northern area (2014-2016). In both regions, FMDs follow the linear Gutenberg-Richter (G-R) relationship for magnitudes up to 2-3. At larger magnitudes, however, the number of earthquakes in the southern Montney, within the Fort St. John graben complex, falls off rapidly below the G-R line, whereas in the northern area, with its dominant compressional regime, the number of events rises above the G-R line. This systematic difference may have important implications for seismic hazard assessment of induced seismicity in the two regions, although caution in the interpretation is warranted due to local variability. While for most clusters within the southern Montney area the linear or truncated G-R relationship provides reliable seismicity rates for events below magnitude 4, the G-R relationship underestimates the seismicity rate for magnitudes above 3 in the northern Montney. Using a well-located dataset of earthquakes in the southern Montney, one can observe that (1) seismic productivity correlates well with the volume injected during hydraulic fracturing and (2) there is a clear depth dependence of the G-R b-value: clusters with deeper median depths show lower b-values than those with shallower depths.


2014, Vol. 08 (04), pp. 1450010
Author(s): Santi Pailoplee

In this study, the geospatial frequency–magnitude distribution (FMD) b-value images of prospective sources of upcoming earthquakes were investigated along the Indonesian Sunda Margin (ISM), which strikes parallel to and near the Indonesian island chain. After enhancing the completeness and stability of the earthquake catalogue, the seismicity data were separated according to seismotectonic setting into shallow crustal and intraslab earthquakes. To verify the spatial relationship between b-values and the occurrence of subsequent major earthquakes, the complete shallow crustal seismicity dataset (1980–2005) was truncated into a 1980–2000 sub-dataset. Under the assumption of a fixed number of earthquakes, retrospective tests of both the complete and truncated datasets supported the expectation that areas of comparatively low b-value mark likely hypocenters of future earthquakes. The present-day distributions of b-values derived from the complete (1980–2005) shallow crustal and intraslab seismicity datasets revealed eight and six earthquake-prone areas, respectively, along the ISM. Since most of the high-risk areas proposed here are quite close to major Indonesian cities, attention should be paid, and mitigation plans developed, for both seismic and tsunami hazards.


2021
Author(s): Rodrigo Estay, Claudia Pavez

The Gutenberg-Richter b-value is commonly used to analyze the frequency-magnitude distribution of earthquakes, describing the proportion of small to large seismic events as a first estimate of seismic hazard. The b-value has also been used as a stress meter, giving insight into the stress regime in different regions around the world. In this research, a grid-based spatial distribution of the b-value was estimated in three different areas of Norway: northern (74°-81° N / 12°-26° E), southern (57°-64° N / 3°-12° E), and the ridge zones of Mohns and Knipovich. We used a complete catalog for the years 2000 to 2019 obtained from the Norwegian National Seismic Network online database. The magnitude of completeness was estimated separately for each zone, both in time and space, covering a total area of ~425,000 km². Our results show a regional variation of the mean b-value between northern Norway (b = 0.79), southern Norway (b = 1.03), and the ridge (b = 0.73), which can be interpreted in terms of the predominant stress regime in each zone. Only a few b-value calculations had previously been made in Norway, mainly to analyze local intraplate sequences; to our knowledge, this research is the first estimation of the regional spatial variation of the b-value in the country.


1995, Vol. 85 (6), pp. 1858-1866
Author(s): F. Ramón Zúñiga, Max Wyss

Abstract A simple procedure is presented for analyzing magnitudes and seismicity rates reported in earthquake catalogs in order to discriminate between inadvertently introduced changes in magnitude and real seismicity changes. We assume that the rate and the frequency-magnitude relation of the independent background seismicity do not change with time. Observed differences in the frequency-magnitude relation (a and b values) between data from two periods are modeled as due to a transformation of the magnitude scale. The transformation equation is found by a least-squares-fitting process based on the seismicity data for earthquakes large enough to be reported completely and by comparing the linear relation of one period to the other. For smaller events, an additional factor accounting for increased (decreased) detection is allowed. This fitting technique is tested on a data set from Parkfield for which two types of magnitudes, amplitude and duration, were computed for each earthquake. We found that the b-value fitting technique yielded virtually the same result as a linear regression assuming the same errors in the two magnitudes. The technique is also applied to interpret the nature of reporting rate changes in a local (Guerrero, Mexico) and a regional (Italy) earthquake catalog. In Guerrero, a magnitude change in 1991.37 can be modeled about equally well by Mnew = Mold + 0.5 or by Mnew = 1.02 Mold + 0.38, but residuals with the latter transformation are smaller. In Italy, a magnitude change in 1980.21 cannot be modeled satisfactorily by a simple magnitude shift but is well described by a compression of the magnitude scale given by Mnew = 0.67 Mold + 1.03. The proposed b-slope fitting method provides a means to interpret quantitatively, and in some cases correct for, artificial reporting rate changes in earthquake catalogs.
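Under the abstract's assumption of stationary background seismicity, the transformation Mnew = c·Mold + d can be read directly off the Gutenberg-Richter parameters (a, b) fit to the two periods. The closed form below is an illustrative simplification of the paper's least-squares fitting procedure, not the authors' code:

```python
def magnitude_transform(a_old, b_old, a_new, b_new):
    """Infer (c, d) in M_new = c * M_old + d from Gutenberg-Richter
    parameters (log10 N = a - b * M) of two catalog periods with the
    same underlying seismicity. Substituting M_old = (M_new - d) / c
    into the old relation gives b_new = b_old / c and
    a_new = a_old + b_new * d, which invert to the lines below."""
    c = b_old / b_new
    d = (a_new - a_old) / b_new
    return c, d
```

For instance, a period-2 b-value higher than period 1's (b_new > b_old) implies a compression of the magnitude scale (c < 1), as in the Italy example with Mnew = 0.67 Mold + 1.03.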


Author(s): F. R. Zuniga, M. Reyners, P. Villamor

We have analyzed the main temporal characteristics of the earthquake data in the catalogue of seismicity of New Zealand, with the objective of providing a general overview of its content and limitations. To this end we employed statistical tools that allow objective estimation of the times at which seismicity rates changed, as well as of the main reasons for those changes. We found that the seismicity record of the largest events (M > 4.0) is bracketed by significant changes that occurred during 1940, 1965-1968, and 1987, with other less significant changes taking place during 1960, 1983, and 1992. By comparing the rates obtained for intervals bounded by these dates, we determined that a linear correction to the magnitudes in the period 1968 to 1987, over the whole depth range of events, may be useful to match the frequency-magnitude distribution obtained from the current data in the interval 1987 to 2004. A different pattern emerged when separating shallow from intermediate and deep events: most of the temporal variations observed mainly affected the deep events (z > 40 km), while the reported rate of shallow seismicity for M > 4.0 has been remarkably homogeneous since 1940. This observation is supported not only by the constancy of seismicity rates but also by the similarity of frequency-magnitude distributions during the time intervals analyzed. A systematic evaluation of the minimum magnitude of completeness for shallow seismicity yielded Mc = 4.4 for the interval 1940 to 1968, Mc = 3.9 for 1968 to 1987, and Mc = 2.6 for the current stage from 1987 to 2004. A simple magnitude shift of 0.2 units applied to intermediate and deep events in the interval 1968 to 1987 was found to provide a good match to the observed rate for events from 1987 to 2004.
The evaluation of Mc for events in the 40 ≤ z ≤ 600 km depth range yielded values of 5.5 for data in the interval 1940 to 1968, 4.0 for 1968 to 1987, and 3.6 for the current operative practice starting in 1987. A detailed investigation of the variation of the magnitude of completeness with time revealed different trends that appear to correspond to times when major changes were made to the network, in particular the installation and removal of local microearthquake networks.
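Completeness magnitudes like those quoted above are often estimated with the maximum-curvature method: take the most populated bin of the magnitude histogram as Mc and add an empirical correction for the method's known tendency to underestimate. A minimal sketch (the +0.2 correction and 0.1 bin width are conventional choices, not necessarily those used in this study):

```python
from collections import Counter

def mc_maxc(mags, dm=0.1, correction=0.2):
    """Magnitude of completeness by maximum curvature: below Mc the
    catalog misses events, so the histogram peaks near Mc; return the
    most populated bin plus an empirical upward correction."""
    binned = [round(m / dm) * dm for m in mags]
    mode = Counter(binned).most_common(1)[0][0]
    return round(mode + correction, 10)
```

A catalog whose counts peak in the 2.0 bin and decay toward larger magnitudes would yield Mc = 2.2 under these defaults.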


Author(s): Leila Mizrahi, Shyam Nandan, Stefan Wiemer

Abstract Declustering aims to divide earthquake catalogs into independent events (mainshocks) and dependent (clustered) events, and is an integral component of many seismicity studies, including seismic hazard assessment. We assess the effect of declustering on the frequency–magnitude distribution of mainshocks. In particular, we examine the dependence of the b-value of declustered catalogs on the choice of declustering approach and algorithm-specific parameters. Using the catalog of earthquakes in California since 1980, we show that declustering decreases the b-value by up to 30% with respect to the undeclustered catalog. The extent of the reduction depends strongly on the declustering method and parameters applied. We then reproduce a similar effect by declustering synthetic earthquake catalogs with known b-value, generated using an epidemic-type aftershock sequence model. Our analysis suggests that the observed decrease in b-value must, at least partially, arise from the application of the declustering algorithm to the catalog, rather than from differences in the nature of mainshocks versus fore- or aftershocks. We conclude that declustering should be considered a potential source of bias in seismicity and hazard studies.
