Generative models for the transfer of knowledge in seismic interpretation with deep learning

2021 ◽  
Vol 40 (7) ◽  
pp. 534-542
Author(s):  
Ricard Durall ◽  
Valentin Tschannen ◽  
Norman Ettrich ◽  
Janis Keuper

Interpreting seismic data requires the characterization of a number of key elements, such as the position of faults and main reflections, the presence of structural bodies, and the clustering of areas exhibiting a similar amplitude-versus-angle response. Manual interpretation of geophysical data is often a difficult and time-consuming task, complicated by limited resolution and the presence of noise. In recent years, approaches based on convolutional neural networks have shown remarkable results in automating certain interpretative tasks. However, these state-of-the-art systems usually need to be trained in a supervised manner, and they suffer from a generalization problem: it is highly challenging to train a model that yields accurate results on new real data whose acquisition, processing, and geology differ from those of the training data. In this work, we introduce a novel method that combines generative neural networks with a segmentation task in order to narrow the gap between annotated training data and uninterpreted target data. We validate our approach on two applications: the detection of diffraction events and the picking of faults. We show that when transitioning from synthetic training data to real validation data, our workflow yields superior results compared to its counterpart without the generative network.

2021 ◽  
Vol 14 (2) ◽  
pp. 127-135
Author(s):  
Fadhil Yusuf Rahadika ◽  
Novanto Yudistira ◽  
Yuita Arum Sari

During the COVID-19 pandemic, many offline activities moved online via video meetings to prevent the spread of the virus. In online video meetings, some micro-interactions are missing compared to direct social interaction. Machine-assisted facial expression recognition in online video meetings is expected to improve understanding of the interactions among users. Many studies have shown that CNN-based neural networks are effective and accurate at image classification. In this study, several open facial expression datasets, totalling 342,497 training images, were used to train CNN-based neural networks. The best results were obtained with a ResNet-50 architecture using the Mish activation function and the Accuracy Booster Plus block, trained with the Ranger optimizer and Gradient Centralization for 60,000 steps with a batch size of 256. The best model achieves accuracies of 0.5972 on the AffectNet validation set, 0.8636 on the FERPlus validation set, 0.8488 on the FERPlus test set, and 0.8879 on the RAF-DB test set. The proposed method outperformed plain ResNet in all test scenarios without transfer learning, and there is potential for better performance with a pre-trained model. The code is available at https://github.com/yusufrahadika-facial-expressions-essay.
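The Mish activation used in this architecture is not defined in the abstract; a minimal sketch of its standard form, mish(x) = x · tanh(softplus(x)), applied element-wise inside the network:

```python
import math

def softplus(x: float) -> float:
    # Numerically stable softplus: ln(1 + e^x)
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def mish(x: float) -> float:
    # Mish: smooth, non-monotonic alternative to ReLU;
    # near-linear for large x, bounded below for negative x
    return x * math.tanh(softplus(x))
```

In a deep learning framework this would be applied tensor-wise; the scalar form above only illustrates the definition.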


2020 ◽  
Vol 10 (2) ◽  
pp. 483 ◽  
Author(s):  
Eko Ihsanto ◽  
Kalamullah Ramli ◽  
Dodi Sudiana ◽  
Teddy Surya Gunawan

Many algorithms have been developed for automated electrocardiogram (ECG) classification. Due to the non-stationary nature of the ECG signal, it is challenging to use traditional hand-crafted methods, such as time-based feature extraction and classification, as the basis for a machine learning implementation. This paper proposes a novel method, an ensemble of depthwise separable convolutional (DSC) neural networks, for the classification of cardiac arrhythmia ECG beats. Using the proposed method, the four stages of ECG classification (QRS detection, preprocessing, feature extraction, and classification) are reduced to just two: QRS detection and classification. No preprocessing is required, and feature extraction is combined with classification. Moreover, to reduce the computational cost while maintaining accuracy, several techniques were implemented, including the All Convolutional Network (ACN), Batch Normalization (BN), and ensembles of convolutional neural networks. The performance of the proposed ensemble CNNs was evaluated on the MIT-BIH arrhythmia database. In the training phase, around 22% of the 110,057 beats extracted from 48 records were utilized. Using only this 22% of labeled training data, the proposed algorithm was able to classify the remaining 78% of the database into 16 classes. The sensitivity (Sn), specificity (Sp), positive predictivity (Pp), and accuracy (Acc) were 99.03%, 99.94%, 99.03%, and 99.88%, respectively. The proposed algorithm required around 180 μs, which is suitable for real-time application. These results show that the proposed method outperforms other state-of-the-art methods.
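The cost saving behind depthwise separable convolutions can be illustrated by comparing parameter counts with a standard convolution (a generic sketch, not the authors' exact network):

```python
def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    # Standard convolution: one k x k filter per (input, output) channel pair
    return k * k * c_in * c_out

def dsc_params(k: int, c_in: int, c_out: int) -> int:
    # Depthwise separable: one k x k depthwise filter per input channel,
    # followed by a 1 x 1 pointwise convolution mixing across channels
    return k * k * c_in + c_in * c_out
```

For example, with k = 3, 64 input channels, and 128 output channels, the standard layer needs 73,728 weights while the depthwise separable version needs 8,768, roughly an 8x reduction.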


2019 ◽  
Vol 20 (23) ◽  
pp. 6019 ◽  
Author(s):  
Dongliang Guo ◽  
Qiaoqiao Wang ◽  
Meng Liang ◽  
Wei Liu ◽  
Junlan Nie

Cavity analysis in molecular dynamics is important for understanding molecular function. However, analyzing the dynamic patterns of molecular cavities remains a difficult task. In this paper, we propose a novel method to represent molecular cavities topologically by vectorization. First, a characterization of cavities is established through the Word2Vec model, based on an analogy between cavities and terms in natural language processing (NLP). Then, techniques such as dimension reduction and clustering are used to conduct an exploratory analysis of the vectorized molecular cavities. On a real data set, we demonstrate that our approach preserves the topological characteristics of cavities and can discover change patterns among a large number of cavities.


2020 ◽  
Vol 29 (05) ◽  
pp. 2050013
Author(s):  
Oualid Araar ◽  
Abdenour Amamra ◽  
Asma Abdeldaim ◽  
Ivan Vitanov

Traffic Sign Recognition (TSR) is a crucial component in many automotive applications, such as driver assistance, sign maintenance, and vehicle autonomy. In this paper, we present an efficient approach to training a machine learning-based TSR solution. In our choice of recognition method, we have opted for convolutional neural networks, which have demonstrated best-in-class performance in previous works on TSR. One of the challenges related to training deep neural networks is the requirement for a large amount of training data. To circumvent the tedious process of acquiring and manually labelling real data, we investigate the use of synthetically generated images. Our networks, trained on only synthetic data, are capable of recognising traffic signs in challenging real-world footage. The classification results achieved on the GTSRB benchmark are seen to outperform existing state-of-the-art solutions.


Geophysics ◽  
2001 ◽  
Vol 66 (1) ◽  
pp. 220-236 ◽  
Author(s):  
Daniel P. Hampson ◽  
James S. Schuelke ◽  
John A. Quirein

We describe a new method for predicting well‐log properties from seismic data. The analysis data consist of a series of target logs from wells which tie a 3-D seismic volume. The target logs theoretically may be of any type; however, the greatest success to date has been in predicting porosity logs. From the 3-D seismic volume a series of sample‐based attributes is calculated. The objective is to derive a multiattribute transform, which is a linear or nonlinear transform between a subset of the attributes and the target log values. The selected subset is determined by a process of forward stepwise regression, which derives increasingly larger subsets of attributes. An extension of conventional crossplotting involves the use of a convolutional operator to resolve frequency differences between the target logs and the seismic data. In the linear mode, the transform consists of a series of weights derived by least‐squares minimization. In the nonlinear mode, a neural network is trained, using the selected attributes as inputs. Two types of neural networks have been evaluated: the multilayer feedforward network (MLFN) and the probabilistic neural network (PNN). Because of its mathematical simplicity, the PNN appears to be the network of choice. To estimate the reliability of the derived multiattribute transform, crossvalidation is used. In this process, each well is systematically removed from the training set, and the transform is rederived from the remaining wells. The prediction error for the hidden well is then calculated. The validation error, which is the average error for all hidden wells, is used as a measure of the likely prediction error when the transform is applied to the seismic volume. The method is applied to two real data sets. In each case, we see a continuous improvement in predictive power as we progress from single‐attribute regression to linear multiattribute prediction to neural network prediction. 
This improvement is evident not only on the training data but, more importantly, on the validation data. In addition, the neural network shows a significant improvement in resolution over that from linear regression.
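The forward stepwise selection at the heart of this workflow can be sketched as follows (illustrative only, in the linear least-squares mode; the convolutional operator and the neural-network modes are omitted):

```python
import numpy as np

def fit_sse(X: np.ndarray, y: np.ndarray, cols: list) -> float:
    # Least-squares fit of the target log on the chosen attribute columns
    # (plus an intercept), returning the sum of squared residuals
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ w
    return float(r @ r)

def stepwise_select(X: np.ndarray, y: np.ndarray, n_attrs: int) -> list:
    # Greedily grow the attribute subset, at each step adding whichever
    # remaining attribute most reduces the training error
    selected = []
    for _ in range(n_attrs):
        candidates = [c for c in range(X.shape[1]) if c not in selected]
        best = min(candidates, key=lambda c: fit_sse(X, y, selected + [c]))
        selected.append(best)
    return selected
```

The crossvalidation described in the abstract would wrap this selection, rederiving the weights with each well held out in turn.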


2020 ◽  
Vol 25 (1) ◽  
pp. 43-50
Author(s):  
Pavlo Radiuk

The achievement of high-precision segmentation in medical image analysis has been an active research direction over the past decade. Significant success in medical imaging tasks has become feasible through the employment of deep learning methods, including convolutional neural networks (CNNs). Convolutional architectures have mostly been applied to homogeneous medical datasets with separate organs. Nevertheless, the segmentation of volumetric medical images of several organs remains an open question. In this paper, we investigate fully convolutional neural networks (FCNs) and propose a modified 3D U-Net architecture for processing computed tomography (CT) volumetric images in automatic semantic segmentation tasks. To benchmark the architecture, we utilised the differentiable Sørensen-Dice similarity coefficient (SDSC) as a validation metric and optimised it on the training data by minimising the loss function. Our hand-crafted architecture was trained and tested on a manually compiled dataset of CT scans. The improved 3D U-Net architecture achieved an average SDSC score of 84.8% on the testing subset across multiple abdominal organs. We also compared our architecture with recognised state-of-the-art results and demonstrated that 3D U-Net based architectures can achieve competitive performance and efficiency in the multi-organ segmentation task.
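The Sørensen-Dice score used here as the validation metric has a simple form; a minimal sketch for flattened binary masks (the differentiable "soft" variant used for training replaces hard labels with predicted probabilities):

```python
def dice_score(pred, target, smooth: float = 1e-6) -> float:
    # Sørensen-Dice: 2|A ∩ B| / (|A| + |B|) over flattened masks;
    # the smooth term guards against division by zero on empty masks
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + smooth) / (total + smooth)
```

As a loss to minimise, one would use 1 - dice_score(pred, target) with probabilistic predictions.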


Author(s):  
Murray Christian ◽  
Ben Murrell

During the emergence of a pandemic, we need to estimate the prevalence of a disease using serological assays whose characterization is incomplete, relying on limited validation data. This introduces uncertainty for which we need to account.

In our treatment, the data take the form of continuous assay measurements of antibody response to antigens (e.g., ELISA) and fall into two groups. The training data include the confirmed positive or negative infection status for each sample. The population data include only the assay measurements and are assumed to be a random sample from the population for which we estimate the seroprevalence.

We use the training data to model the relationship between assay values and infection status, capturing both individual-level uncertainty in infection status and uncertainty due to limited training data. We then estimate the posterior distribution over population prevalence, additionally capturing uncertainty due to finite samples.

Finally, we introduce a means to pool information over successive time points, using a Gaussian process, which dramatically reduces the variance of our estimates. The methodological approach described here was developed to support the longitudinal characterization of the seroprevalence of COVID-19 in Stockholm, Sweden.
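A much simpler, binarized version of the prevalence-estimation problem gives the flavour of the posterior computation. This sketch assumes a dichotomized assay with known sensitivity and specificity and a flat prior; it is not the authors' continuous-assay model, which avoids binarization and propagates validation-data uncertainty:

```python
import math

def prevalence_posterior(k: int, n: int, sens: float, spec: float, grid: int = 1000):
    # Grid posterior over prevalence, flat prior, given k positive
    # results out of n tests. The apparent positive rate is
    # P(test+) = prev * sens + (1 - prev) * (1 - spec).
    prevs = [i / grid for i in range(grid + 1)]
    log_post = []
    for p in prevs:
        q = p * sens + (1.0 - p) * (1.0 - spec)
        q = min(max(q, 1e-12), 1.0 - 1e-12)  # clamp for log safety
        log_post.append(k * math.log(q) + (n - k) * math.log(1.0 - q))
    m = max(log_post)  # subtract max before exponentiating for stability
    weights = [math.exp(lp - m) for lp in log_post]
    z = sum(weights)
    return prevs, [w / z for w in weights]
```

With sensitivity 0.9 and specificity 0.95, 27 positives out of 200 tests put the posterior mode near a true prevalence of 10%.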


2021 ◽  
pp. 1-28
Author(s):  
Ahmed Abdulhamid Mahmoud ◽  
Salaheldin Elkatatny

Evaluation of the quality of unconventional hydrocarbon resources is a critical stage in characterizing these resources, and it requires evaluation of the total organic carbon (TOC). Generally, TOC is determined from laboratory experiments; however, it is hard to obtain a continuous TOC profile along the drilled formations from these experiments. Another way to evaluate TOC is through empirical correlations, but the currently available correlations lack accuracy, especially when applied to formations other than those used to develop them. This study introduces an empirical equation for evaluating the TOC in the Devonian Duvernay shale from only the gamma-ray and spectral gamma-ray logs of uranium, thorium, and potassium, together with a newly developed term that accounts for the TOC from linear regression analysis. The new correlation was developed with an artificial neural network (ANN) algorithm trained on 750 data points from Well-A, and was then tested and validated on 226 and 73 data points from Well-B and Well-C, respectively. For the training data, the ANN predicted the TOC with an AAPE of only 8.5%. Using the developed equation, the TOC was predicted with an AAPE of only 11.5% on the testing data. On the validation data, the developed equation outperformed previous models in estimating the TOC, with an AAPE of only 11.9%.
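The AAPE figures quoted here are average absolute percentage errors; a minimal sketch of the metric:

```python
def aape(actual, predicted) -> float:
    # Average absolute percentage error, in percent:
    # mean of |actual - predicted| / |actual| over all samples
    errors = [abs((a - p) / a) for a, p in zip(actual, predicted)]
    return 100.0 * sum(errors) / len(errors)
```

Note the metric is undefined where the measured value is zero, so TOC samples at or near zero would need special handling.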


2019 ◽  
Vol 488 (4) ◽  
pp. 5232-5250 ◽  
Author(s):  
Alexander Chaushev ◽  
Liam Raynard ◽  
Michael R Goad ◽  
Philipp Eigmüller ◽  
David J Armstrong ◽  
...  

Vetting of exoplanet candidates in transit surveys is a manual process, which suffers from a large number of false positives and a lack of consistency. Previous work has shown that convolutional neural networks (CNNs) provide an efficient solution to these problems. Here, we apply a CNN to classify planet candidates from the Next Generation Transit Survey (NGTS). For the training data sets we compare both real data with injected planetary transits and fully simulated data, as well as how their different compositions affect network performance. We show that fewer hand-labelled light curves can be utilized, while still achieving competitive results. With our best model, we achieve an area under the curve (AUC) score of (95.6 ± 0.2) per cent and an accuracy of (88.5 ± 0.3) per cent on our unseen test data, as well as (76.5 ± 0.4) per cent and (74.6 ± 1.1) per cent in comparison to our existing manual classifications. The neural network recovers, with high probability, 13 of the 14 confirmed planets observed by NGTS. We use simulated data to show that the overall network performance is resilient to mislabelling of the training data set, a problem that might arise due to unidentified, low signal-to-noise transits. Using a CNN, the time required for vetting can be reduced by half, while still recovering the vast majority of manually flagged candidates. In addition, we identify many new high-probability candidates that were not flagged by human vetters.
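The AUC score reported above equals the probability that a randomly chosen true candidate is ranked above a randomly chosen false positive; a minimal rank-based (Mann-Whitney) sketch:

```python
def auc(scores_pos, scores_neg) -> float:
    # Fraction of (positive, negative) pairs in which the positive
    # example receives the higher score; ties count as half a win.
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Library implementations compute the same quantity from sorted ranks in O(n log n) rather than this O(n²) pairwise loop.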


Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5356 ◽  
Author(s):  
Francisco Pastor ◽  
Juan M. Gandarias ◽  
Alfonso J. García-Cerezo ◽  
Jesús M. Gómez-de-Gabriel

In this paper, a novel method of active tactile perception based on 3D neural networks and a high-resolution tactile sensor installed on a robot gripper is presented. A haptic exploratory procedure based on robotic palpation is performed to obtain pressure images at different grasping forces, which provide information not only about the external shape of the object but also about its internal features. The gripper consists of two underactuated fingers with a tactile sensor array in the thumb. A new representation of tactile information as 3D tactile tensors is described. During a squeeze-and-release process, the pressure images read from the tactile sensor are concatenated to form a tensor that captures the variation of the pressure matrices with the grasping force. These tensors feed a 3D convolutional neural network (3D CNN) called 3D TactNet, which is able to classify the grasped object through active interaction. Results show that the 3D CNN performs better, providing higher recognition rates with a smaller amount of training data.
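The 3D tactile tensor described above stacks successive 2D pressure images along the grasp-force axis; a minimal sketch (the frame dimensions and the helper name are illustrative, not from the paper):

```python
import numpy as np

def build_tactile_tensor(pressure_frames) -> np.ndarray:
    # pressure_frames: list of H x W pressure images captured at
    # increasing grasp forces during the squeeze-and-release cycle.
    # Stacking yields an F x H x W tensor suitable for a 3D CNN input.
    return np.stack(pressure_frames, axis=0)
```

The force axis plays the role that time or depth plays in other 3D CNN applications, letting the convolutions see how pressure patterns evolve as the grip tightens.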

