A hybrid regularization scheme for the inversion of magnetotelluric data from natural and controlled sources to layer and distortion parameters

Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. E301-E315 ◽  
Author(s):  
Thomas Kalscheuer ◽  
Juliane Hübert ◽  
Alexey Kuvshinov ◽  
Tobias Lochbühler ◽  
Laust B. Pedersen

Magnetotelluric (MT), radiomagnetotelluric (RMT), and, in particular, controlled-source audiomagnetotelluric (CSAMT) data are often heavily distorted by near-surface inhomogeneities. We developed a novel scheme to invert MT, RMT, and CSAMT data in the form of scalar or tensorial impedances and vertical magnetic transfer functions simultaneously for layer resistivities and electric and magnetic galvanic distortion parameters. The inversion scheme uses smoothness constraints to regularize layer resistivities and either Marquardt-Levenberg damping or the minimum-solution-length criterion to regularize distortion parameters. A depth-of-investigation range is estimated by comparing layered model sections derived from first- and second-order smoothness constraints. Synthetic examples demonstrate that earth models are reconstructed properly for distorted and undistorted tensorial CSAMT data. In the inversion of scalar CSAMT data, such as the determinant impedance or individual tensor elements, the reduced number of transfer functions inevitably leads to increased ambiguity for distortion parameters. As a consequence of this ambiguity for scalar data, distortion parameters often grow over the iterations to unrealistic absolute values when regularized with the Marquardt-Levenberg scheme. Essentially, this growth exploits compensating relationships between terms containing electric and/or magnetic distortion. In a regularization with the minimum-solution-length criterion, the distortion parameters converge into a stable configuration after several iterations and attain reasonable values. The inversion algorithm was applied to a CSAMT field data set collected along a profile over a tunnel construction site at Hallandsåsen, Sweden. To avoid erroneous inverse models from strong anthropogenic effects on the data, two scalar transfer functions (one scalar impedance and one scalar vertical magnetic transfer function) were selected for inversion. Compared with a regularization of distortion parameters with the Marquardt-Levenberg method, the minimum-solution-length criterion yielded smaller absolute values of distortion parameters and a horizontally more homogeneous distribution of electrical conductivity.
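As a rough illustration of the hybrid regularization described above (a sketch under generic assumptions, not the authors' implementation), the fragment below assembles one Gauss-Newton-style update in which layer parameters are penalized with a second-difference smoothness operator while distortion parameters are damped toward zero in the minimum-solution-length sense. All names (J, Wd, r, lam_smooth, lam_dist) are hypothetical placeholders.

```python
import numpy as np

def hybrid_regularized_step(J, Wd, r, m, n_layers, lam_smooth, lam_dist):
    """One Gauss-Newton-style update for a model m = [layer log-resistivities, distortion parameters].

    J          -- Jacobian of the transfer functions with respect to all model parameters
    Wd         -- data weighting matrix (reciprocal data standard deviations on the diagonal)
    r          -- residual vector (observed minus predicted transfer functions)
    n_layers   -- number of layer parameters at the start of m
    lam_smooth -- trade-off weight of the second-order smoothness constraint on the layers
    lam_dist   -- trade-off weight of the minimum-solution-length damping of the distortion block
    """
    n_params = m.size
    # Second-difference (roughness) operator acting only on the layer block.
    D = np.zeros((n_layers - 2, n_params))
    for i in range(n_layers - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Identity damping acting only on the distortion block (minimum solution length).
    P = np.zeros((n_params - n_layers, n_params))
    P[:, n_layers:] = np.eye(n_params - n_layers)
    # Regularized normal equations of the linearized problem.
    A = J.T @ Wd.T @ Wd @ J + lam_smooth * D.T @ D + lam_dist * P.T @ P
    b = J.T @ Wd.T @ Wd @ r - lam_smooth * D.T @ (D @ m) - lam_dist * P.T @ (P @ m)
    return m + np.linalg.solve(A, b)
```

Swapping the distortion-block damping matrix for a scaled identity added to the full normal-equations matrix would give the Marquardt-Levenberg variant contrasted in the abstract.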

Geophysics ◽  
1992 ◽  
Vol 57 (11) ◽  
pp. 1482-1492 ◽  
Author(s):  
James L. Simmons ◽  
Milo M. Backus

A linearized tomographic‐inversion algorithm estimates the near‐surface slowness anomalies present in a conventional, shallow‐marine seismic reflection data set. First‐arrival time residuals are the data to be inverted. The anomalies are treated as perturbations relative to a known, laterally‐invariant reference velocity model. Below the sea floor the reference model varies smoothly with depth; consequently the first arrivals are considered to be diving waves. In the offset‐midpoint domain the geometric patterns of traveltime perturbations produced by the anomalies resemble hyperbolas. Based on simple ray theory, these geometric patterns are predictable and can be used to relate the unknown model to the data. The assumption of a laterally‐invariant reference model permits an efficient solution in the offset‐wavenumber domain which is obtained in a single step using conventional least squares. The tomographic image shows the vertical‐traveltime perturbations associated with the anomalies as a function of midpoint at a number of depths. As implemented, the inverse problem is inherently stable. The first arrivals sample the subsurface to a maximum depth of roughly 500 m (≈ one‐fifth of the spread length). The model is parameterized to consist of fifteen 20-m thick layers spanning a depth range of 80–380 m. One‐way vertical‐traveltime delays as large as 10 ms are estimated. Assuming that these time delays are distributed over the entire 20-m thick layers, velocities much slower than water velocity are implied for the anomalies. Maps of the tomographic images show the spatial location and orientation of the anomalies throughout the prospect for the upper 400 m. Each line is processed independently, and the results are corroborated to a high degree at the line intersections.
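The single-step least-squares solution in the offset-wavenumber domain can be sketched as follows. This is a schematic stand-in, not the published algorithm: the laterally invariant reference model decouples midpoint wavenumbers, so each wavenumber gets its own small damped least-squares solve; the kernel callable relating layer delay perturbations to offset-domain residuals is a hypothetical placeholder.

```python
import numpy as np

def wavenumber_domain_ls(residuals, kernel, dx, eps=1e-3):
    """Per-wavenumber damped least-squares inversion of first-arrival time residuals.

    residuals -- (n_midpoints, n_offsets) array of first-arrival time residuals
    kernel    -- callable kernel(k) -> (n_offsets, n_layers) complex matrix relating
                 wavenumber-domain layer perturbations to the residual spectrum (placeholder)
    dx        -- midpoint spacing used to build the wavenumber axis
    Returns      (n_midpoints, n_layers) vertical-traveltime perturbations per layer.
    """
    n_mid, _ = residuals.shape
    ks = 2.0 * np.pi * np.fft.fftfreq(n_mid, d=dx)
    R = np.fft.fft(residuals, axis=0)                   # spectrum over midpoints, per offset
    M = np.zeros((n_mid, kernel(0.0).shape[1]), dtype=complex)
    for i, k in enumerate(ks):
        G = kernel(k)                                   # small (n_offsets x n_layers) system
        A = G.conj().T @ G + eps * np.eye(G.shape[1])   # damped normal equations
        M[i] = np.linalg.solve(A, G.conj().T @ R[i])
    return np.real(np.fft.ifft(M, axis=0))              # back to the midpoint domain
```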


Geophysics ◽  
2001 ◽  
Vol 66 (1) ◽  
pp. 158-173 ◽  
Author(s):  
Gary W. McNeice ◽  
Alan G. Jones

Accurate interpretation of magnetotelluric data requires an understanding of the directionality and dimensionality inherent in the data, and valid implementation of an appropriate method for removing the effects of shallow, small‐scale galvanic scatterers on the data to yield responses representative of regional‐scale structures. The galvanic distortion analysis approach advocated by Groom and Bailey has become the most adopted method, rightly so given that the approach decomposes the magnetotelluric impedance tensor into determinable and indeterminable parts, and tests statistically the validity of the galvanic distortion assumption. As proposed by Groom and Bailey, one must determine the appropriate frequency‐independent telluric distortion parameters and geoelectric strike by fitting the seven‐parameter model on a frequency‐by‐frequency and site‐by‐site basis independently. Although this approach has the attraction that one gains a more intimate understanding of the data set, it is rather time‐consuming and requires repetitive application. We propose an extension to Groom‐Bailey decomposition in which a global minimum is sought to determine the most appropriate strike direction and telluric distortion parameters for a range of frequencies and a set of sites. Also, we show how an analytically‐derived approximate Hessian of the objective function can reduce the required computing time. We illustrate application of the analysis to two synthetic data sets and to real data. Finally, we show how the analysis can be extended to cover the case of frequency‐dependent distortion caused by the magnetic effects of the galvanic charges.
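The structure of such a global fit can be sketched as below, assuming the usual twist/shear parametrization of galvanic distortion: one strike angle shared by all sites and frequencies, one twist/shear pair per site, and the (scaled) regional impedances solved for linearly at each frequency. This is an illustrative objective only (array shapes, names and the optimizer call are placeholders), and the analytically derived approximate Hessian mentioned in the abstract is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def gb_misfit(params, Z_obs, Z_err):
    """Multi-site, multi-frequency misfit for a common strike and per-site twist/shear.

    params -- [strike, twist_1, shear_1, ..., twist_S, shear_S]
    Z_obs  -- (n_sites, n_freqs, 2, 2) complex observed impedance tensors
    Z_err  -- matching array of impedance standard errors
    """
    theta = params[0]
    R = rot(theta)
    total = 0.0
    n_sites, n_freqs = Z_obs.shape[:2]
    for i in range(n_sites):
        t, e = params[1 + 2 * i], params[2 + 2 * i]
        T = np.array([[1.0, -t], [t, 1.0]]) / np.sqrt(1.0 + t * t)   # twist matrix
        S = np.array([[1.0, e], [e, 1.0]]) / np.sqrt(1.0 + e * e)    # shear matrix
        C = R @ T @ S
        # Basis tensors multiplying the two (scaled) regional impedance elements a and b:
        Pa = np.outer(C[:, 0], [0.0, 1.0]) @ R.T
        Pb = np.outer(C[:, 1], [1.0, 0.0]) @ R.T
        for j in range(n_freqs):
            W = 1.0 / Z_err[i, j]
            A = np.column_stack([(W * Pa).ravel(), (W * Pb).ravel()])
            z = (W * Z_obs[i, j]).ravel()
            coef, *_ = np.linalg.lstsq(A, z, rcond=None)   # best-fit a, b at this frequency
            total += np.sum(np.abs(z - A @ coef) ** 2)
    return total

# Outer minimization over strike, twists and shears; a multistart or global search
# would be used in practice, this only illustrates the structure of the problem:
# x0 = np.zeros(1 + 2 * n_sites)
# result = minimize(gb_misfit, x0, args=(Z_obs, Z_err), method="Nelder-Mead")
```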


2020 ◽  
Vol 222 (3) ◽  
pp. 1620-1638 ◽  
Author(s):  
M Moorkamp ◽  
A Avdeeva ◽  
Ahmet T Basokur ◽  
Erhan Erdogan

SUMMARY Galvanic distortion of magnetotelluric (MT) data is a common effect that can impede the reliable imaging of subsurface structures. Recently, we presented an inversion approach that includes a mathematical description of the effect of galvanic distortion as inversion parameters and demonstrated its efficiency with real data. We now systematically investigate the stability of this inversion approach with respect to different inversion strategies, starting models and model parametrizations. We utilize a data set of 310 MT sites that were acquired for geothermal exploration. In addition to impedance tensor estimates over a broad frequency range, the data set also comprises transient electromagnetic measurements to determine near-surface conductivity and estimates of distortion at each site. We therefore can compare our inversion approach to these distortion estimates and the resulting inversion models. Our experiments show that inversion with distortion correction produces stable results for various inversion strategies and for different starting models. Compared to inversions without distortion correction, we can reproduce the observed data better and reduce subsurface artefacts. In contrast, shifting the impedance curves at high frequencies to match the transient electromagnetic measurements reduces the misfit of the starting model, but does not have a strong impact on the final results. Thus, our results suggest that including a description of distortion in the inversion is more efficient and should become a standard approach for MT inversion.
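For context, the galvanic (electric) distortion commonly modelled in such schemes multiplies the regional impedance by a real, frequency-independent 2 x 2 matrix. The short sketch below shows this forward relation and how its four entries per site can be appended to the inversion's model vector; names are illustrative, not the authors' implementation.

```python
import numpy as np

def apply_galvanic_distortion(Z, c):
    """Galvanic distortion of an impedance tensor as commonly modelled: Z_distorted = C @ Z,
    with C a real, frequency-independent 2x2 matrix. 'c' holds the four distortion
    parameters of one site (names are illustrative)."""
    C = np.eye(2) + np.array([[c[0], c[1]], [c[2], c[3]]])
    return C @ Z   # Z is the (2, 2) complex regional impedance at one frequency

# In a distortion-aware inversion, the four entries of C per site are appended to the
# model vector alongside the conductivity parameters, so the same data constrain both.
```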


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced. The approximation is achieved by computing only a limited number of diagonals of the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
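A minimal sketch of the idea, assuming a generic extrapolation operator A and data weights W (both placeholders), is shown below: the damped normal equations are restricted to a few diagonals and solved as a banded system. In practice those diagonals would be computed directly rather than extracted from a full Hessian, which is done here only for brevity.

```python
import numpy as np
from scipy.linalg import solve_banded

def approx_regularized_datum(A, W, d, eps=1e-2, n_diag=5):
    """Regularization/datuming as weighted, damped least squares with a band-limited
    approximation to the Hessian. A maps the regular wavefield at the datum to the
    irregular recording positions; W weights the data; all names are illustrative."""
    H = A.conj().T @ (W.conj().T @ (W @ A))   # full Hessian (the costly part in practice)
    g = A.conj().T @ (W.conj().T @ (W @ d))   # right-hand side
    n = H.shape[0]
    # Keep only n_diag diagonals either side of the main diagonal (plus damping),
    # mimicking the limited-diagonal approximation, and solve the banded system.
    ab = np.zeros((2 * n_diag + 1, n), dtype=H.dtype)
    for k in range(-n_diag, n_diag + 1):
        diag = np.diagonal(H, offset=k).copy()
        if k == 0:
            diag = diag + eps
        row = n_diag - k
        if k >= 0:
            ab[row, k:] = diag
        else:
            ab[row, :n + k] = diag
    return solve_banded((n_diag, n_diag), ab, g)
```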


2015 ◽  
Vol 8 (2) ◽  
pp. 941-963 ◽  
Author(s):  
T. Vlemmix ◽  
F. Hendrick ◽  
G. Pinardi ◽  
I. De Smedt ◽  
C. Fayt ◽  
...  

Abstract. A 4-year data set of MAX-DOAS observations in the Beijing area (2008–2012) is analysed with a focus on NO2, HCHO and aerosols. Two very different retrieval methods are applied. Method A describes the tropospheric profile with 13 layers and makes use of the optimal estimation method. Method B uses 2–4 parameters to describe the tropospheric profile and an inversion based on a least-squares fit. For each constituent (NO2, HCHO and aerosols) the retrieval outcomes are compared in terms of tropospheric column densities, surface concentrations and "characteristic profile heights" (i.e. the height below which 75% of the vertically integrated tropospheric column density resides). We find the best agreement between the two methods for tropospheric NO2 column densities, with a standard deviation of relative differences below 10%, a correlation of 0.99 and a linear regression with a slope of 1.03. For tropospheric HCHO column densities we find a similar slope, but also a systematic bias of almost 10% which is likely related to differences in profile height. Aerosol optical depths (AODs) retrieved with method B are 20% higher than those retrieved with method A. They agree better with AERONET measurements, which are on average only 5% lower, although the relative differences are considerable (standard deviation ~25%). With respect to near-surface volume mixing ratios and aerosol extinction we find considerably larger relative differences: 10 ± 30, −23 ± 28 and −8 ± 33% for aerosols, HCHO and NO2, respectively. The frequency distributions of these near-surface concentrations nevertheless show quite good agreement, which indicates that near-surface concentrations derived from MAX-DOAS are certainly useful in a climatological sense. A major difference between the two methods is the dynamic range of retrieved characteristic profile heights, which is larger for method B than for method A. This effect is most pronounced for HCHO, where retrieved profile shapes with method A are very close to the a priori, and moderate for NO2 and aerosol extinction, which on average show quite good agreement for characteristic profile heights below 1.5 km. One of the main advantages of method A is its stability, even under suboptimal conditions (e.g. in the presence of clouds). Method B is generally less stable, and this probably explains a substantial part of the quite large relative differences between the two methods. However, despite a relatively low precision for individual profile retrievals, seasonally averaged profile heights retrieved with method B appear to be less biased towards a priori assumptions than those retrieved with method A. This gives confidence in the result obtained with method B, namely that aerosol extinction profiles tend on average to be higher than NO2 profiles in spring and summer, whereas they seem on average to be of the same height in winter, a result which is especially relevant to the validation of satellite retrievals.
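As a reference for the optimal estimation retrieval underlying method A, the standard linearized update can be sketched as follows; the forward model, Jacobian and covariance matrices are generic placeholders, not the study's actual retrieval code.

```python
import numpy as np

def optimal_estimation_step(y, F_xa, K, xa, Sa, Se):
    """Single linearized optimal-estimation retrieval step (generic formulation).

    y    -- measurement vector (e.g. differential slant column densities)
    F_xa -- forward model evaluated at the a priori profile xa
    K    -- Jacobian of the forward model at xa
    Sa   -- a priori covariance; Se -- measurement-noise covariance
    """
    Sa_inv = np.linalg.inv(Sa)
    Se_inv = np.linalg.inv(Se)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)    # retrieval covariance
    x_hat = xa + S_hat @ K.T @ Se_inv @ (y - F_xa)      # retrieved profile
    A = S_hat @ K.T @ Se_inv @ K                        # averaging kernel matrix
    return x_hat, S_hat, A
```

Method B, by contrast, fits a small number of profile-shape parameters by least squares, which explains the larger dynamic range of its retrieved profile heights and its weaker pull towards the a priori.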


2014 ◽  
Vol 644-650 ◽  
pp. 2670-2673
Author(s):  
Jun Wang ◽  
Xiao Hong Meng ◽  
Fang Li ◽  
Jun Jie Zhou

With the growing influence of near-surface geophysics, investigation of subsurface structure is of great significance, and geophysical imaging is one of the efficient computational tools that can be applied. This paper uses the inversion of potential-field data for subsurface imaging: gravity and magnetic data are inverted together with a structurally coupled inversion algorithm. The model space is divided into a set of rectangular cells by an orthogonal 2D mesh, and a constant property value (density and magnetic susceptibility) is assumed within each cell. The inversion matrix equation is solved as an unconstrained optimization problem with the conjugate gradient (CG) method. The imaging method is applied to synthetic data for typical gravity and magnetic anomaly models and is tested on field data.
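A schematic of such a jointly regularized inversion solved with conjugate gradients is given below. The structural coupling is replaced here by a crude linear proxy that penalizes differences between the scaled property models, since the cross-gradient constraint often used for structural coupling is nonlinear; all matrices and weights are hypothetical placeholders, not the authors' algorithm.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def joint_inversion_cg(Gg, Gm, dg, dm, lam=1.0, mu=0.1, alpha=1.0):
    """Jointly regularized gravity/magnetic inversion solved with the CG method.

    Gg, Gm -- sensitivity matrices of the gravity and magnetic data over the same cell mesh
    dg, dm -- observed gravity and magnetic data
    lam    -- damping weight; mu -- weight of a simple linear coupling term penalizing
              differences between the density and (scaled) susceptibility models
    """
    n = Gg.shape[1]

    def normal_op(x):
        rho, chi = x[:n], x[n:]
        top = Gg.T @ (Gg @ rho) + lam * rho + mu * (rho - alpha * chi)
        bot = Gm.T @ (Gm @ chi) + lam * chi - mu * alpha * (rho - alpha * chi)
        return np.concatenate([top, bot])

    A = LinearOperator((2 * n, 2 * n), matvec=normal_op, dtype=float)
    b = np.concatenate([Gg.T @ dg, Gm.T @ dm])
    x, info = cg(A, b, maxiter=500)
    return x[:n], x[n:], info   # density model, susceptibility model, CG status
```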


2021 ◽  
Author(s):  
Riccardo Scandroglio ◽  
Till Rehm ◽  
Jonas K. Limbrock ◽  
Andreas Kemna ◽  
Markus Heinze ◽  
...  

The warming of alpine bedrock permafrost in the last three decades and the consequent reduction of frozen areas have been well documented. Their consequences, such as reduced slope stability, put humans and infrastructure at high risk. The year 2020 in particular was the warmest on record at 3000 m a.s.l., embedded in the warmest decade.

Recently, the development of electrical resistivity tomography (ERT) as a standard technique for quantitative permafrost investigation has allowed extended monitoring of this hazard, even including quantitative 4D monitoring strategies (Scandroglio et al., in review). Nevertheless, the thermo-hydro-mechanical dynamics of steep bedrock slopes cannot be fully explained by a single measurement technique, and multi-approach setups are therefore necessary in the field to record external forcing and improve the deciphering of internal responses.

The Zugspitze Kammstollen is an 850 m long tunnel located between 2660 and 2780 m a.s.l., a few decameters under the mountain ridge. The first ERT monitoring was conducted in 2007 (Krautblatter et al., 2010) and has been followed by more than a decade of intensive field work, leading to the collection of a unique multi-approach data set of still unpublished data. Continuous logging of environmental parameters such as rock/air temperatures and water infiltration through joints, as well as a dedicated thermal model (Schröder and Krautblatter, in review), provide important additional knowledge on bedrock internal dynamics. Summer ERT and seismic refraction tomography surveys with manual and automated joint-displacement measurements on the ridge offer information on external controls, complemented by three weather stations and a 44 m long borehole within 1 km of the tunnel.

Year-round access to the area enables uninterrupted monitoring and maintenance of instruments for reliable data collection. “Precisely controlled natural conditions”, access restricted to researchers only, and logistical support by the Environmental Research Station Schneefernerhaus make this tunnel particularly attractive for developing benchmark experiments. Examples are the design of induced-polarization monitoring, the analysis of tunnel spring water for isotope investigations, and multi-annual mass monitoring by means of relative gravimetry.

Here, we present the recently modernized layout of the outdoor laboratory together with the latest monitoring results, opening a discussion on further possible uses of this extensive multi-approach data set, aiming to understand not only the permafrost thermal evolution but also the connected thermo-hydro-mechanical processes.

Krautblatter, M. et al. (2010) ‘Temperature-calibrated imaging of seasonal changes in permafrost rock walls by quantitative electrical resistivity tomography (Zugspitze, German/Austrian Alps)’, Journal of Geophysical Research: Earth Surface, 115(2), pp. 1–15. doi: 10.1029/2008JF001209.
Scandroglio, R. et al. (in review) ‘4D-Quantification of alpine permafrost degradation in steep rock walls using a laboratory-calibrated ERT approach’, Near Surface Geophysics.
Schröder, T. and Krautblatter, M. (in review) ‘A high-resolution multi-phase thermo-geophysical model to verify long-term electrical resistivity tomography monitoring in alpine permafrost rock walls (Zugspitze, German/Austrian Alps)’, Earth Surface Processes and Landforms.


Author(s):  
Rohit Shankaran ◽  
Alexander Rimmer ◽  
Alan Haig

In recent years, due to the use of drilling risers with larger and heavier BOP/LMRP stacks, fatigue loading on subsea wellheads has increased, which poses potential restrictions on the duration of drilling operations. To track wellhead and conductor fatigue capacity consumption and support safe drilling operations, a range of methods has been applied:
• analytical riser models and measured environmental data;
• BOP motion measurement and transfer functions;
• strain gauge data.
Strain gauge monitoring is considered the most accurate method for measuring fatigue capacity consumption. To compare the three approaches and establish recommendations for an optimal method of estimating fatigue accumulation in the wellhead, a monitoring data set was obtained on a well offshore West of Shetland. This paper presents an analysis of measured strain, motions, and analytical predictions with the objective of better understanding the accuracy, limitations, and conservatism of each of the three methods defined above. Of the various parameters that affect the accuracy of the fatigue damage estimates, the paper identifies that the selection of the analytical conductor-soil model is critical to narrowing the gap between fatigue life predictions from the different approaches. The work presented here examines the influence of alternative approaches to modelling conductor-soil interaction beyond the traditionally used API soil model. Overall, the paper presents the monitoring equipment and analytical methodology needed to advance the accuracy of wellhead fatigue damage measurements.
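Once stress cycles have been counted from the strain-gauge records (e.g. by rainflow counting of the strain time series converted to stress), fatigue capacity consumption is typically accumulated with an S-N curve and Miner's rule. The sketch below illustrates that generic post-processing step; the curve coefficients are placeholders, not values from this study.

```python
import numpy as np

def miner_damage(stress_ranges, counts, a=1.52e12, m=3.0, scf=1.0):
    """Fatigue damage accumulation from counted stress cycles via an S-N curve and Miner's rule.

    stress_ranges -- array of stress ranges in MPa (e.g. from rainflow counting)
    counts        -- number of cycles observed at each stress range
    a, m          -- S-N curve parameters, N = a * S**(-m) (placeholder values)
    scf           -- stress concentration factor applied at the detail of interest
    """
    S = scf * np.asarray(stress_ranges, dtype=float)
    N_allow = a * S ** (-m)                        # cycles to failure at each stress range
    return np.sum(np.asarray(counts) / N_allow)    # Miner sum; life ~ monitored duration / damage
```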


2020 ◽  
Vol 223 (2) ◽  
pp. 1378-1397
Author(s):  
Rosemary A Renaut ◽  
Jarom D Hogue ◽  
Saeed Vatankhah ◽  
Shuang Liu

SUMMARY We discuss the focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid. For the uniform grid, the model sensitivity matrices have a block-Toeplitz-Toeplitz-block structure for each block of columns related to a fixed depth layer of the subsurface. Then, all forward operations with the sensitivity matrix, or its transpose, are performed using the 2-D fast Fourier transform. Simulations are provided to show that the implementation of the focusing inversion algorithm using the fast Fourier transform is efficient, and that the algorithm can be realized on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 10⁶. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub–Kahan bidiagonalization or randomized singular value decomposition algorithms. These two algorithms are contrasted for their efficiency when used to solve large-scale problems with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and for data sets of size m it is sufficient to use projected spaces of size approximately m/8. For the inversion of magnetic data sets, we show that it is more efficient to use the Golub–Kahan bidiagonalization, and that it is again sufficient to use projected spaces of size approximately m/8. Simulations support the presented conclusions and are verified for the inversion of a magnetic data set obtained over the Wuskwatim Lake region in Manitoba, Canada.
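The FFT-based forward operation exploits the translation invariance of the kernel on the uniform grid: each depth-layer block of the sensitivity matrix multiplies a model vector via a zero-padded 2-D FFT instead of a dense matrix product. A minimal sketch, assuming an offset-indexed kernel array rather than any particular gravity or magnetic kernel, is shown below.

```python
import numpy as np

def bttb_matvec(kernel, model_layer):
    """Multiply one depth-layer block of a block-Toeplitz-Toeplitz-block sensitivity
    matrix by a model vector using zero-padded 2-D FFTs (no dense block is formed).

    kernel      -- (2*ny-1, 2*nx-1) array of sensitivity values; kernel[u, v] is the value
                   for horizontal offset (u - (ny - 1), v - (nx - 1)) between an observation
                   point and a cell, assumed translation-invariant on the uniform grid
    model_layer -- (ny, nx) array of cell values for this depth layer
    Returns        (ny, nx) predicted-data contribution of this layer.
    """
    ny, nx = model_layer.shape
    Py, Px = kernel.shape                       # padded sizes (2*ny-1, 2*nx-1)
    K = np.fft.rfft2(kernel, s=(Py, Px))
    M = np.fft.rfft2(model_layer, s=(Py, Px))
    full = np.fft.irfft2(K * M, s=(Py, Px))     # linear (not circular) convolution via padding
    return full[ny - 1:2 * ny - 1, nx - 1:2 * nx - 1]

# The full forward model sums bttb_matvec over all depth layers; the transpose operation
# is obtained analogously with the kernel reversed in both horizontal directions.
```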


2020 ◽  
Author(s):  
Yee Jun Tham ◽  
Nina Sarnela ◽  
Carlos A. Cuevas ◽  
Iyer Siddharth ◽  
Lisa Beck ◽  
...  

Atmospheric halogen chemistry, such as the catalytic reactions of bromine and chlorine radicals with ozone (O3), is known to cause springtime surface-ozone destruction in the polar regions. Although the initial atmospheric reactions of chlorine with ozone are well understood, the final oxidation steps leading to the formation of chlorate (ClO3-) and perchlorate (ClO4-) remain unclear due to the lack of direct evidence of their presence and fate in the atmosphere. In this study, we present the first high-resolution ambient data set of gas-phase HClO3 (chloric acid) and HClO4 (perchloric acid), obtained from field measurements at the Villum Research Station, Station Nord, in high-Arctic North Greenland (81°36’ N, 16°40’ W) during the spring of 2015. A state-of-the-art chemical ionization atmospheric pressure interface time-of-flight mass spectrometer (CI-APi-TOF) was used in negative ion mode with nitrate as the reagent ion to detect gas-phase HClO3 and HClO4. We measured significant levels of HClO3 and HClO4 only during the springtime ozone depletion events in Greenland, with concentrations up to 9 x 10^5 molecules cm^-3. Air mass trajectory analysis shows that the air during the ozone depletion events was confined to the near-surface layer, indicating that O3 and the surfaces of sea ice and snowpack may play important roles in the formation of HClO3 and HClO4. We used high-level quantum-chemical methods to calculate the gas-phase ultraviolet-visible absorption spectra and cross-sections of HClO3 and HClO4 to assess their fates in the atmosphere. Overall, our results reveal the presence of HClO3 and HClO4 during ozone depletion events, which could affect the chlorine chemistry in the Arctic atmosphere.

