The use of wavelet transforms for improved interpretation of airborne transient electromagnetic data

Geophysics ◽  
2013 ◽  
Vol 78 (3) ◽  
pp. E117-E123 ◽  
Author(s):  
Vanessa Nenna ◽  
Adam Pidlisecky

The continuous wavelet transform (CWT) is used to create maps of dominant spatial scales in airborne transient electromagnetic (ATEM) data sets to identify cultural noise and topographic features. The approach is applied directly to ATEM data and does not require that the measurements be inverted, though it can easily be applied to an inverted model. For this survey, we apply the CWT spatially to B-field and dB/dt ATEM data collected in the Edmonton-Calgary Corridor of southern Alberta. The average wavelet power is binned over four ranges of spatial scale and converted to 2D maps of normalized power within each bin. The analysis of the approximately 2 million soundings that make up the survey runs in a matter of minutes on a 2.4 GHz Intel processor. We perform the same CWT analysis on maps of surface and bedrock topography and also compare ATEM results to maps of infrastructure in the region. We find that linear features identified on power maps that differ significantly between B-field and dB/dt data are well correlated with a high density of infrastructure. Features that are well correlated with topography tend to be consistent in power maps for both types of data. For this data set, use of the CWT reveals that topographic features and cultural noise from high-pressure oil and gas pipelines affect a significant portion of the survey region. The identification of cultural noise and surface features in the raw ATEM data through CWT analysis provides a means of focusing and speeding up processing prior to inversion, though the magnitude of this effect on ATEM signals is not assessed.
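To make the workflow concrete, here is a minimal sketch of scale-binned CWT power along a single flight line, assuming synthetic dB/dt amplitudes, a Morlet wavelet from PyWavelets, and illustrative scale bins rather than the authors' exact configuration.

```python
import numpy as np
import pywt

def binned_cwt_power(signal, dx, scale_edges, wavelet="morl"):
    """Average CWT power within spatial-scale bins along one flight line."""
    scales = np.arange(2, 900)
    coeffs, freqs = pywt.cwt(signal, scales, wavelet, sampling_period=dx)
    power = np.abs(coeffs) ** 2                  # shape: (n_scales, n_samples)
    wavelengths = 1.0 / freqs                    # spatial scale in metres
    binned = []
    for lo, hi in zip(scale_edges[:-1], scale_edges[1:]):
        mask = (wavelengths >= lo) & (wavelengths < hi)
        binned.append(power[mask].mean(axis=0))  # mean power per sounding
    binned = np.array(binned)
    return binned / binned.max(axis=1, keepdims=True)   # normalize within each bin

# One simulated dB/dt flight line sampled every 30 m (purely synthetic).
x = np.arange(0, 30000, 30.0)
line = np.sin(2 * np.pi * x / 5000) + 0.2 * np.random.randn(x.size)
power_maps = binned_cwt_power(line, dx=30.0,
                              scale_edges=[100, 500, 2000, 8000, 32000])
```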

2010 ◽  
Vol 2010 ◽  
pp. 1-14 ◽  
Author(s):  
Stefan Polanski ◽  
Annette Rinke ◽  
Klaus Dethloff

The regional climate model HIRHAM has been applied over the Asian continent to simulate the Indian monsoon circulation under present-day conditions. The model is driven at the lateral and lower boundaries by European reanalysis (ERA40) data for the period from 1958 to 2001. Simulations with a horizontal resolution of 50 km are carried out to analyze the regional monsoon patterns. The focus in this paper is on the validation of the long-term summer monsoon climatology and its variability with respect to circulation, temperature, and precipitation. Additionally, the monsoonal behavior in simulations for wet and dry years has been investigated and compared against several observational data sets. The simulations reproduce the observations well, largely owing to the realistic representation of topographic features. Over the central land areas of India and in the higher-elevation Tibetan and Himalayan regions, the simulated precipitation agrees better with a high-resolution gridded precipitation data set than ERA40 does.


SPE Journal ◽  
2017 ◽  
Vol 23 (03) ◽  
pp. 719-736 ◽  
Author(s):  
Quan Cai ◽  
Wei Yu ◽  
Hwa Chi Liang ◽  
Jenn-Tai Liang ◽  
Suojin Wang ◽  
...  

Summary The oil-and-gas industry is entering an era of “big data” because of the huge number of wells drilled with the rapid development of unconventional oil-and-gas reservoirs during the past decade. The massive amount of data generated presents a great opportunity for the industry to use data-analysis tools to help make informed decisions. The main challenge is the lack of effective and efficient data-analysis tools for extracting useful information from the enormous amount of data available to support the decision-making process. In developing tight shale reservoirs, an optimal drilling strategy is critical to minimizing the risk of drilling in areas that would result in low-yield wells. The objective of this study is to develop an effective data-analysis tool capable of dealing with big and complicated data sets to identify hot zones in tight shale reservoirs with the potential to yield highly productive wells. The proposed tool is developed on the basis of nonparametric smoothing models, which are superior to traditional multiple-linear-regression (MLR) models in both predictive power and the ability to deal with nonlinear, higher-order variable interactions. This data-analysis tool is capable of handling one response variable and multiple predictor variables. To validate our tool, we used two real data sets—one with 249 tight oil horizontal wells from the Middle Bakken and the other with 2,064 shale gas horizontal wells from the Marcellus Shale. Results from the two case studies revealed that our tool not only achieves much better predictive power than traditional MLR models in identifying hot zones in tight shale reservoirs but also provides guidance on developing optimal drilling and completion strategies (e.g., well length and depth, amount of proppant and water injected). By comparing results from the two data sets, we found that our tool achieves model performance on the big data set (2,064 Marcellus wells) with only four predictor variables that is similar to its performance on the small data set (249 Bakken wells) with six predictor variables. This implies that, for big data sets, even with a limited number of available predictor variables, our tool can still be very effective in identifying hot zones that would yield highly productive wells. The data sets we had access to in this study contain very limited completion, geological, and petrophysical information. Results from this study clearly demonstrate that the data-analysis tool is powerful and flexible enough to take advantage of any additional engineering and geology data, allowing operators to gain insight into the impact of these factors on well performance.
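As a hedged, generic illustration of why a nonparametric model can outperform MLR when variable interactions are nonlinear, the sketch below compares the two on synthetic well data; the feature names and the gradient-boosted regressor are stand-ins, not the paper's data or its specific smoothing model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical scaled predictors, e.g. lateral length, proppant, water, depth.
X = rng.uniform(size=(n, 4))
# Nonlinear response with an interaction term, plus noise.
y = 3 * X[:, 0] * X[:, 1] + np.sin(4 * X[:, 2]) + 0.5 * X[:, 3] + 0.1 * rng.normal(size=n)

mlr = LinearRegression()
nonparam = GradientBoostingRegressor(random_state=0)

print("MLR cross-validated R^2:", cross_val_score(mlr, X, y, cv=5, scoring="r2").mean())
print("GBM cross-validated R^2:", cross_val_score(nonparam, X, y, cv=5, scoring="r2").mean())
```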


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Atila Ozguc ◽  
Ali Kilcik ◽  
Volkan Sarp ◽  
Hülya Yeşilyaprak ◽  
Rıza Pektaş

In this study, we used the flare index (FI) data taken from Kandilli Observatory for the period 2009–2020. The data are analyzed in three categories: Northern Hemisphere, Southern Hemisphere, and total FI data sets, where the total FI data set is the sum of the Northern and Southern Hemispheric values. The periodic variations of these three FI data sets were investigated using the multitaper method (MTM) and Morlet wavelet analysis. The cross wavelet (XWT) and wavelet coherence (WTC) analysis methods were also applied between these data sets. Our analysis yielded the following results: (1) long- and short-term periodicities (2048 ± 512 days and periodicities shorter than 62 days) exist in all data sets without exception, at least at the 95% confidence level; (2) all periodic variations are detected around the maximum of the solar cycle, while no meaningful period is detected during the minima; (3) some periodicities show a data-set preference: the roughly 150-day Rieger period appears only in the whole-disk data set, while the 682-, 204-, and 76.6-day periods appear only in the Northern Hemisphere data set; (4) during Solar Cycle 24, more flare activity is seen in the Southern Hemisphere, so the whole-disk periodicities are dominated by this hemisphere; (5) in general, there is phase mixing between the Northern and Southern Hemisphere FI data, except for the roughly 1024-day periodicity, and the best phase coherency is obtained between the Southern Hemisphere and total flare index data sets; (6) for the Northern and Southern Hemisphere FI data sets, there is no significant correlation between the two continuous wavelet transforms, while the strongest correlation is obtained for the total FI and Southern Hemisphere data sets.
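The sketch below illustrates a Morlet-wavelet period scan of the kind described, applied to a synthetic daily flare-index-like series; the series, scale range, and peak-picking are illustrative assumptions, not the authors' full MTM/wavelet pipeline.

```python
import numpy as np
import pywt

days = np.arange(4000)
# Synthetic daily series with ~150-day and ~2048-day components plus noise.
fi = (np.sin(2 * np.pi * days / 150)
      + 0.5 * np.sin(2 * np.pi * days / 2048)
      + 0.3 * np.random.randn(days.size))

scales = np.arange(2, 2048, 2)
coeffs, freqs = pywt.cwt(fi, scales, "morl", sampling_period=1.0)
global_power = (np.abs(coeffs) ** 2).mean(axis=1)   # global wavelet spectrum
periods = 1.0 / freqs                               # in days

# Period of the strongest peak; secondary maxima mark further periodicities.
print("Strongest period (days):", round(periods[np.argmax(global_power)], 1))
```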


2021 ◽  
Author(s):  
Tobias Sieg ◽  
Annegret Thieken

The management of risks arising from natural hazards requires a reliable estimation of the hazards' impact on exposed objects. The data sets used for this estimation have improved in recent years, reflecting an increasing amount of detail with regard to spatial, temporal, or process information. Yet the influence of the choice of data and the degree of detail on the estimated risk is rarely assessed.

We estimated flood damage to private households and companies for a flood event in 2013 in Germany, using two different approaches with varying levels of detail to describe the hazard, the exposed objects, and their vulnerability towards the hazard. One flood map is based on local flood maps computed by the European Joint Research Center that do not include embankments, while the other flood map was derived specifically for this particular flood event. Exposed elements are mapped using the land-use-based data set BEAM (Basic European Asset Map) and with an object-based approach using OpenStreetMap data. The vulnerability is described by ordinary stage-damage functions and by tree-based models including additional damage-driving variables. The estimations are validated with reported damage figures per federal state and compared to each other to quantify the influence of the different data sets at various spatial scales.

The results suggest that a stronger focus on exposed elements could considerably improve the reliability of impact estimations. The individual assessment of the influence of the different components on the overall risk points to promising next steps for further investigation.
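As a minimal sketch of the stage-damage-function style of vulnerability description, the example below applies an assumed depth-damage curve to hypothetical exposed buildings; neither the curve nor the asset values come from the study.

```python
import numpy as np

def stage_damage_fraction(depth_m):
    """Simple square-root depth-damage curve, capped at total loss (assumption)."""
    return np.clip(np.sqrt(np.maximum(depth_m, 0.0) / 3.0), 0.0, 1.0)

# Hypothetical exposed buildings: water depth (m) and asset value (EUR).
depths = np.array([0.2, 0.8, 1.5, 0.0, 2.4])
values = np.array([250e3, 310e3, 180e3, 400e3, 275e3])

damage = stage_damage_fraction(depths) * values
print("Estimated damage per building (EUR):", damage.round(0))
print("Total estimated damage (EUR):", damage.sum().round(0))
```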


2021 ◽  
Author(s):  
Arsalan Ahmed ◽  
Hadrien Michel ◽  
Wouter Deleersnyder ◽  
David Dudal ◽  
Thomas Hermans

Accurate subsurface imaging through geophysics is of prime importance for many geological and hydrogeological applications. Recently, airborne electromagnetic methods have become more popular because of their potential to quickly acquire large data sets at depths relevant for hydrogeological applications. However, the inversion of airborne EM data does not have a unique solution, so that many electrical conductivity models can explain the data. Two families of methods can be applied for inversion: deterministic and stochastic. Deterministic (or regularized) approaches are limited in terms of uncertainty quantification, as they propose one unique solution according to the chosen regularization term. In contrast, stochastic methods are able to generate many models fitting the data. The most common approach is to use Markov chain Monte Carlo (McMC) methods. However, the application of stochastic methods, even though they are more informative than deterministic ones, remains rare due to their rather high computational cost.

In this research, the newly developed approach named Bayesian Evidential Learning 1D imaging (BEL1D) is used to solve the inverse problem efficiently and stochastically. BEL1D is combined with SimPEG, an open-source Python package, to solve the electromagnetic forward problem. BEL1D bypasses the inversion step by generating random samples from the prior distribution, with defined ranges for the thickness and electrical conductivity of the different layers, simulating the corresponding data, and learning a direct statistical relationship between data and model parameters. From this relationship, BEL1D can generate posterior models fitting the field observations without additional forward model computations. The output of BEL1D shows the range of uncertainty of the subsurface models and identifies which model parameters are the most sensitive and can be accurately estimated from the electromagnetic data.

The application of BEL1D together with SimPEG for stochastic transient electromagnetic inversion is a very efficient approach, as it allows the uncertainty to be estimated at limited cost. Indeed, only a limited number of training models (typically a few thousand) is required for an accurate prediction. Moreover, the computed training models can be reused for other predictions, considerably reducing the computational cost when dealing with similar data sets. It is thus a promising approach for the inversion of dense data sets such as those collected in airborne surveys. In the future, we plan to relax constraints on the model parameters to move towards interpretation of EM data in coastal environments, where transitions can be smooth due to salinity variations.

Keywords: EM, Uncertainty, 1D imaging, BEL1D, SimPEG
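A simplified sketch of a BEL-style workflow is given below, under strong assumptions: the forward operator is a toy stand-in for the SimPEG TDEM simulation, and a nearest-neighbour selection replaces BEL1D's dimension-reduction and canonical-correlation machinery.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

def forward_tdem(model):
    """Toy forward operator (NOT SimPEG): maps (thickness, sigma1, sigma2) to a
    smooth decay-like curve, only so that the workflow below is runnable."""
    t = np.logspace(-5, -2, 20)
    thickness, s1, s2 = model
    return s1 * np.exp(-t * 1e3 / thickness) + 1e-3 * s2 * t ** -0.5

# 1) Sample the prior: layer thickness (m) and conductivities (S/m).
n_train = 3000
prior = np.column_stack([rng.uniform(5, 50, n_train),
                         rng.uniform(0.01, 0.5, n_train),
                         rng.uniform(0.01, 0.5, n_train)])
data = np.array([forward_tdem(m) for m in prior])

# 2) "Learn" the data-model relation: nearest neighbours in data space stand in
#    for BEL1D's PCA/CCA steps.
nn = NearestNeighbors(n_neighbors=200).fit(data)

# 3) Posterior for an observed sounding: the prior models whose simulated data
#    are closest to the observation approximate the conditional distribution.
d_obs = forward_tdem(np.array([25.0, 0.2, 0.05]))
_, idx = nn.kneighbors(d_obs.reshape(1, -1))
posterior = prior[idx[0]]
print("Posterior mean (thickness, sigma1, sigma2):", posterior.mean(axis=0).round(3))
print("Posterior std :", posterior.std(axis=0).round(3))
```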


2019 ◽  
Author(s):  
Nora Helbig ◽  
David Moeser ◽  
Michaela Teich ◽  
Laure Vincent ◽  
Yves Lejeune ◽  
...  

Abstract. Snow interception by the forest canopy drives the spatial heterogeneity of subcanopy snow accumulation, leading to significant differences between forested and non-forested areas at a variety of scales. Snow intercepted by the forest canopy can also drastically change the surface albedo. As such, accurately modelling snow interception is important for various model applications such as hydrological, weather, and climate predictions. Due to difficulties in the direct measurement of snow interception, previous empirical snow interception models were developed only at the point scale. The lack of spatially extensive data sets has hindered the validation of snow interception models in different snow climates, forest types, and at various spatial scales, and has reduced the accurate representation of snow interception in coarse-scale models. We present two novel models, one for the spatial mean and one for the standard deviation of snow interception, derived from an extensive snow interception data set collected in a spruce forest in the Swiss Alps. Besides open-area snowfall, subgrid model input parameters include the standard deviation of the DSM (digital surface model) and the sky view factor, both of which can be easily pre-computed. Validation of both models was performed with snow interception data sets acquired in geographically different locations under disparate weather conditions. Snow interception data sets from the Rocky Mountains, U.S., and the French Alps compared well to modelled snow interception, with an NRMSE of ≤ 10 % for the spatial mean and ≤ 13 % for the standard deviation. Our results suggest that the proposed snow interception models can be applied in coarse land surface model grid cells, provided that a sufficiently fine-scale DSM of the forest is available to derive subgrid forest parameters.
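For reference, the sketch below computes the NRMSE used in the validation, assuming normalization by the range of the observations (the abstract does not state which normalization is used); the interception values are made up.

```python
import numpy as np

def nrmse(observed, modelled):
    """RMSE normalized by the observed range (the normalization is an assumption)."""
    observed = np.asarray(observed)
    modelled = np.asarray(modelled)
    rmse = np.sqrt(np.mean((modelled - observed) ** 2))
    return rmse / (observed.max() - observed.min())

obs = np.array([0.8, 1.2, 2.5, 3.1, 1.9])   # observed mean interception (mm), illustrative
mod = np.array([0.9, 1.1, 2.3, 3.4, 2.0])   # modelled mean interception (mm), illustrative
print(f"NRMSE = {100 * nrmse(obs, mod):.1f} %")
```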


2018 ◽  
Vol 154 (2) ◽  
pp. 149-155
Author(s):  
Michael Archer

1. Yearly records of worker Vespula germanica (Fabricius) taken in suction traps at Silwood Park (28 years) and at Rothamsted Research (39 years) are examined. 2. Using the autocorrelation function (ACF), a significant negative 1-year lag followed by a lesser, non-significant positive 2-year lag was found in all, or parts of, each data set, indicating an underlying population dynamic of a 2-year cycle with a damped waveform. 3. The minimum number of years before the 2-year cycle with a damped waveform appeared varied between 17 and 26, or the cycle was not found in some data sets. 4. Ecological factors delaying or preventing the occurrence of the 2-year cycle are considered.
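The sketch below reproduces the ACF diagnostic on a synthetic yearly count series with a damped 2-year alternation; the series and its parameters are illustrative, not the Silwood Park or Rothamsted records.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(3)
years = 39
t = np.arange(years)
# Damped 2-year alternation plus noise (illustrative only).
counts = 100 + 40 * (-1) ** t * np.exp(-t / 20) + rng.normal(0, 10, years)

rho, confint = acf(counts, nlags=5, alpha=0.05)
for lag in (1, 2):
    lo, hi = confint[lag]
    print(f"lag {lag}: r = {rho[lag]:+.2f}, 95% CI = [{lo:+.2f}, {hi:+.2f}]")
```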


2018 ◽  
Vol 21 (2) ◽  
pp. 117-124 ◽  
Author(s):  
Bakhtyar Sepehri ◽  
Nematollah Omidikia ◽  
Mohsen Kompany-Zareh ◽  
Raouf Ghavami

Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Materials & Methods: Three data sets, comprising 36 EPAC antagonists, 79 CD38 inhibitors, and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so that nine CoMFA models were built for each data set. The results obtained show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying five variable selection approaches, namely FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS, and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Result & Conclusion: Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS runs take only a few seconds. In addition, applying FFD, SRD-FFD, IVE-PLS, and SRD-UVE-PLS preserves CoMFA contour map information for both fields.
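As a generic, hedged illustration of how removing noisy variables changes the cross-validated predictive power (q²) of a PLS model, the sketch below uses a random descriptor matrix and a naive correlation filter; neither stands in for the CoMFA fields nor for the specific selection methods listed above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n, p_inf, p_noise = 60, 20, 300
X_inf = rng.normal(size=(n, p_inf))
y = X_inf @ rng.normal(size=p_inf) + 0.5 * rng.normal(size=n)
X = np.hstack([X_inf, rng.normal(size=(n, p_noise))])   # informative + noise columns

pls = PLSRegression(n_components=5)
q2_all = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()

# Naive filter: keep the columns most correlated with y (a crude stand-in
# for FFD, IVE-PLS, etc.; note it peeks at all samples, unlike a proper scheme).
corr = np.abs(np.corrcoef(X.T, y)[:-1, -1])
keep = np.argsort(corr)[-40:]
q2_sel = cross_val_score(pls, X[:, keep], y, cv=5, scoring="r2").mean()

print(f"q2 with all {X.shape[1]} variables: {q2_all:.2f}")
print(f"q2 with {keep.size} selected variables: {q2_sel:.2f}")
```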


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, which is motivated by the Fourier transform. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as Convolutional Neural Networks (CNNs). Results: The proposed model is therefore a combination of an LSTM and a CNN. We evaluate the model on two data sets. For the first data set, which is more standardized than the other, our model outperforms, or at least equals, previous works. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% in some cases. We also analyze the effect of these parameters on the performance.
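A hedged sketch of the LSTM-encoder plus CNN-classifier idea is given below in Keras; the window length, feature count, layer sizes, and number of classes are assumptions, not the paper's exact architecture.

```python
from tensorflow.keras import layers, models

window, channels, n_classes = 128, 3, 6      # e.g. 128 samples of a 3-axis accelerometer

model = models.Sequential([
    layers.Input(shape=(window, channels)),
    # The LSTM produces an encoded sequence (a temporal "fingerprint") ...
    layers.LSTM(64, return_sequences=True),
    # ... which is arranged as a 2D array (time x encoding) with one channel ...
    layers.Reshape((window, 64, 1)),
    # ... and classified with a small CNN, as for an image.
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# Training would follow the usual pattern, e.g.:
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
```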


2019 ◽  
Vol 73 (8) ◽  
pp. 893-901
Author(s):  
Sinead J. Barton ◽  
Bryan M. Hennelly

Cosmic ray artifacts may be present in all photo-electric readout systems. In spectroscopy, they present as random unidirectional sharp spikes that distort spectra and may have an effect on post-processing, possibly affecting the results of multivariate statistical classification. A number of methods have previously been proposed to remove cosmic ray artifacts from spectra, but the goal of removing the artifacts while making no other change to the underlying spectrum is challenging. One of the most successful and commonly applied methods for the removal of cosmic ray artifacts involves the capture of two sequential spectra that are compared in order to identify spikes. The disadvantage of this approach is that at least two recordings are necessary, which may be problematic for dynamically changing spectra, and which can reduce the signal-to-noise (S/N) ratio compared with a single recording of equivalent duration, due to the inclusion of two instances of read noise. In this paper, a cosmic ray artifact removal algorithm is proposed that works in a similar way to the double acquisition method but requires only a single capture, so long as a data set of similar spectra is available. The method employs normalized covariance in order to identify a similar spectrum in the data set, from which a direct comparison reveals the presence of cosmic ray artifacts, which are then replaced with the corresponding values from the matching spectrum. The advantage of the proposed method over the double acquisition method is investigated in the context of the S/N ratio, and the method is applied to various data sets of Raman spectra recorded from biological cells.
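The sketch below illustrates the single-capture idea: match the target spectrum to the most similar spectrum in a reference set via normalized covariance, flag outliers in the difference, and replace them. The 5-sigma spike threshold and the synthetic spectra are illustrative assumptions, not the published algorithm's exact parameters.

```python
import numpy as np

def remove_cosmic_rays(spectrum, dataset, n_sigma=5.0):
    """Replace spike samples in `spectrum` using the best-matching spectrum in
    `dataset` (which should not contain `spectrum` itself)."""
    spec = spectrum - spectrum.mean()
    refs = dataset - dataset.mean(axis=1, keepdims=True)
    # Normalized covariance (correlation) of the target with each reference.
    scores = (refs @ spec) / (np.linalg.norm(refs, axis=1) * np.linalg.norm(spec))
    match = dataset[np.argmax(scores)]
    diff = spectrum - match
    spikes = diff > diff.mean() + n_sigma * diff.std()   # cosmic rays: positive spikes
    cleaned = spectrum.copy()
    cleaned[spikes] = match[spikes]                      # substitute matched values
    return cleaned, spikes

# Example with synthetic Raman-like spectra; the target gets one added spike.
rng = np.random.default_rng(0)
base = np.exp(-0.5 * ((np.arange(1000) - 500) / 40.0) ** 2)
dataset = base + 0.01 * rng.normal(size=(50, 1000))
target = base + 0.01 * rng.normal(size=1000)
target[300] += 1.0                                       # simulated cosmic ray
cleaned, spikes = remove_cosmic_rays(target, dataset)
print("Spike detected at channel(s):", np.where(spikes)[0])
```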

