Generalized Born radii computation using linear models and neural networks

2019 ◽  
Vol 36 (6) ◽  
pp. 1757-1764
Author(s):  
Saida Saad Mohamed Mahmoud ◽  
Gennaro Esposito ◽  
Giuseppe Serra ◽  
Federico Fogolari

Abstract
Motivation: Implicit solvent models play an important role in describing the thermodynamics and the dynamics of biomolecular systems. Key to an efficient use of these models is the computation of generalized Born (GB) radii, which is accomplished by algorithms based on the electrostatics of inhomogeneous dielectric media. The speed and accuracy of such computations are still an issue, especially for their intensive use in classical molecular dynamics. Here, we propose an alternative approach that encodes the physics of the phenomena and the chemical structure of the molecules in model parameters which are learned from examples.
Results: GB radii have been computed using (i) a linear model and (ii) a neural network. The input consists of the atom's element and the histogram of neighbouring-atom counts within 16 Å, split by element. Linear models are ca. 8 times faster than the most widely used reference method, and their accuracy is higher: the correlation coefficient with the inverse of 'perfect' GB radii is 0.94, versus 0.80 for the reference method. Neural networks further improve the accuracy of the predictions, with a correlation coefficient with 'perfect' GB radii of 0.97 and a ca. 20% smaller root mean square error.
Availability and implementation: We provide a C program implementing the computation using the linear model, including the coefficients appropriate for the set of Bondi radii, as Supplementary Material. We also provide a Python implementation of the neural network model with parameter and example files in the Supplementary Material.
Supplementary information: Supplementary data are available at Bioinformatics online.
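The featurization described can be illustrated with a minimal sketch. The element list and distance binning below are hypothetical (the abstract does not specify them); only the 16 Å cutoff and the per-element neighbour counts are from the text. A linear model then maps the histogram to the inverse GB radius.

```python
# Hypothetical sketch of the paper's featurization: per-element histograms of
# neighbour counts within 16 A, fed to a linear model. Element set and bin
# width are illustrative assumptions, not taken from the paper.
import math

def neighbour_histogram(atoms, center, elements=("C", "N", "O", "S"),
                        cutoff=16.0, bin_width=2.0):
    """atoms: list of (element, (x, y, z)); returns a flat per-element histogram."""
    n_bins = int(cutoff / bin_width)
    hist = {e: [0] * n_bins for e in elements}
    for elem, pos in atoms:
        d = math.dist(pos, center)
        if 0.0 < d < cutoff and elem in hist:  # exclude the central atom itself
            hist[elem][int(d / bin_width)] += 1
    return [count for e in elements for count in hist[e]]

def linear_inverse_radius(features, weights, bias):
    # A linear model on the histogram features predicts the inverse GB radius.
    return bias + sum(w * f for w, f in zip(weights, features))
```

The histogram is what makes the model fast: it replaces the integral over the dielectric boundary with a fixed-length count vector per atom.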

2019 ◽  
Vol 11 (4) ◽  
pp. 1 ◽  
Author(s):  
Tobias de Taillez ◽  
Florian Denk ◽  
Bojana Mirkovic ◽  
Birger Kollmeier ◽  
Bernd T. Meyer

Different linear models have been proposed to establish a link between an auditory stimulus and the neurophysiological response obtained through electroencephalography (EEG). We investigate if non-linear mappings can be modeled with deep neural networks (DNNs) trained on continuous speech envelopes and EEG data obtained in an auditory attention two-speaker scenario. An artificial neural network was trained to predict the EEG response related to the attended and unattended speech envelopes. After training, the properties of the DNN-based model are analyzed by measuring the transfer function between input envelopes and predicted EEG signals, using click-like stimuli and frequency sweeps as input patterns. Using sweep responses allows separation of the linear and nonlinear response components, also with respect to attention. The responses from the model trained on normal speech resemble event-related potentials despite the fact that the DNN was not trained to reproduce such patterns. These responses are modulated by attention, since we obtain significantly lower amplitudes at latencies of 110 ms, 170 ms and 300 ms after stimulus presentation for unattended processing in contrast to attended processing. The comparison of linear and nonlinear components indicates that the largest contribution arises from linear processing (75%), while the remaining 25% is attributed to nonlinear processes in the model. Further, a spectral analysis showed a stronger 5 Hz component in modeled EEG for attended predictions in contrast to unattended ones. The results indicate that the artificial neural network produces responses consistent with recent findings and presents a new approach for quantifying the model properties.
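The linear baseline that such a DNN generalizes can be sketched as a ridge-regularized fit of time-lagged envelope samples to an EEG channel (a temporal response function). This is an illustrative stand-in with assumed lag count and regularization, not the authors' model:

```python
# Illustrative linear forward model: predict an EEG channel from time-lagged
# samples of the speech envelope. Lag count and ridge strength are assumptions.
import numpy as np

def lagged_design(envelope, n_lags):
    """Row t holds envelope[t], envelope[t-1], ..., envelope[t-n_lags+1]
    (zero-padded at the start)."""
    T = len(envelope)
    X = np.zeros((T, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = envelope[: T - lag]
    return X

def fit_forward_model(envelope, eeg, n_lags=32, ridge=1e-3):
    X = lagged_design(np.asarray(envelope, float), n_lags)
    y = np.asarray(eeg, float)
    # Ridge-regularized least squares for the temporal response function.
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ y)
```

A DNN replaces the single linear map with a learned nonlinear one, which is why the sweep-based decomposition into linear and nonlinear components is informative.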


2020 ◽  
Vol 36 (11) ◽  
pp. 3537-3548
Author(s):  
Nova F Smedley ◽  
Suzie El-Saden ◽  
William Hsu

Abstract
Motivation: Cancer heterogeneity is observed at multiple biological levels. To improve our understanding of these differences and their relevance in medicine, approaches to link organ- and tissue-level information from diagnostic images and cellular-level information from genomics are needed. However, these ‘radiogenomic’ studies often use linear or shallow models, depend on feature selection, or consider one gene at a time to map images to genes. Moreover, no study has systematically attempted to understand the molecular basis of imaging traits based on the interpretation of what the neural network has learned. These studies are thus limited in their ability to understand the transcriptomic drivers of imaging traits, which could provide additional context for determining clinical outcomes.
Results: We present a neural network-based approach that takes high-dimensional gene expression data as input and performs non-linear mapping to an imaging trait. To interpret the models, we propose gene masking and gene saliency to extract learned relationships from radiogenomic neural networks. In glioblastoma patients, our models outperformed comparable classifiers (>0.10 AUC) and our interpretation methods were validated using a similar model to identify known relationships between genes and molecular subtypes. We found that tumor imaging traits had specific transcription patterns, e.g. edema and genes related to cellular invasion, and 10 radiogenomic traits were significantly predictive of survival. We demonstrate that neural networks can model transcriptomic heterogeneity to reflect differences in imaging and can be used to derive radiogenomic traits with clinical value.
Availability and implementation: https://github.com/novasmedley/deepRadiogenomics
Contact: [email protected]
Supplementary information: Supplementary data are available at Bioinformatics online.
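The gene-masking idea can be sketched in a few lines: zero out a gene set in the input and measure the change in the model's prediction. The model below is a stand-in callable, not the trained radiogenomic network:

```python
# Minimal sketch of gene masking: the importance of a gene set is measured by
# how much the prediction changes when those genes are zeroed out.
import numpy as np

def gene_masking_score(model, expression, gene_indices):
    """model: callable mapping an expression vector to a scalar prediction.
    Returns prediction(original) - prediction(masked)."""
    masked = expression.copy()       # leave the caller's input untouched
    masked[gene_indices] = 0.0
    return model(expression) - model(masked)
```

Gene saliency, the second interpretation method named, would instead use the gradient of the prediction with respect to each input gene.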


2020 ◽  
Vol 10 (10) ◽  
pp. 3358 ◽  
Author(s):  
Jiyuan Song ◽  
Aibin Zhu ◽  
Yao Tu ◽  
Hu Huang ◽  
Muhammad Affan Arif ◽  
...  

In response to the need for an exoskeleton to quickly identify the wearer’s movement mode in the mixed control mode, this paper studies the impact of different feature parameters of the surface electromyography (sEMG) signal on the accuracy of human motion pattern recognition using multilayer perceptrons and long short-term memory (LSTM) neural networks. The sEMG signals are extracted from the seven common human motion patterns in daily life, and the time-domain and frequency-domain features are extracted to build a feature parameter dataset for training the classifier. Recognition of human lower extremity movement patterns based on multilayer perceptrons and the LSTM neural network was carried out, and the final recognition accuracy rates of different feature parameters and different classifier model parameters were compared in the process of establishing the dataset. The experimental results show that the best accuracy rate of human motion pattern recognition using multilayer perceptrons is 95.53%, and the best accuracy rate of human motion pattern recognition using the LSTM neural network is 96.57%.
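Typical sEMG time- and frequency-domain features of the kind referred to above can be sketched as follows; the exact feature set used in the paper is not specified in the abstract, so these (mean absolute value, RMS, zero crossings, mean frequency) are common illustrative choices:

```python
# Hedged sketch of common sEMG features; the paper's exact feature set is
# not given in the abstract.
import numpy as np

def semg_features(signal, fs=1000.0):
    x = np.asarray(signal, float)
    signs = np.signbit(x).astype(int)
    feats = {
        "mav": float(np.mean(np.abs(x))),          # mean absolute value
        "rms": float(np.sqrt(np.mean(x ** 2))),    # root mean square
        "zc": int(np.sum(np.diff(signs) != 0)),    # zero-crossing count
    }
    # Mean frequency: power-weighted average of the spectrum.
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    feats["mnf"] = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    return feats
```

Feature vectors like this, computed per analysis window, are what the multilayer perceptron and LSTM classifiers would be trained on.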


2021 ◽  
Vol 2 (2) ◽  
pp. 95-102
Author(s):  
Dmitry Yu. Kushnir ◽  
Nikolay N. Velker ◽  
Darya V. Andornaya ◽  
Yuriy E. Antonov

Accurate real-time estimation of a distance to the nearest bed boundary simplifies the steering of directional wells. For estimation of that distance, we propose an approach of pointwise inversion of resistivity data using neural networks based on a two-layer resistivity formation model. The model parameters are determined from the tool responses using a cascade of neural networks. The first network calculates the resistivity of the layer containing the tool measure point. The subsequent networks take as input the tool responses and the model parameters determined with the previous networks. All networks are trained on the same synthetic database. The samples of that database consist of pairs of model parameters and corresponding noisy tool responses. The results of the proposed approach are close to the results of the general inversion algorithm based on the method of the most-probable parameter combination. At the same time, the proposed inversion is several orders of magnitude faster.
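The cascade described above can be sketched with stand-in regressors in place of trained networks: each stage consumes the raw tool responses plus the parameters recovered by the earlier stages.

```python
# Sketch of the cascaded inversion: stage i maps (tool responses + parameters
# recovered so far) to the next model parameter. The stages here are stand-in
# callables, not trained networks.
def cascade_invert(responses, stages):
    params = []
    for stage in stages:
        # Each stage sees the raw responses plus all previously recovered
        # parameters, exactly as described for the network cascade.
        params.append(stage(list(responses) + params))
    return params
```

In the paper, stage one recovers the resistivity of the layer containing the measure point, and later stages recover the remaining two-layer-model parameters (e.g. the distance to the boundary).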


Author(s):  
C Anand

Several intelligent data mining approaches, including neural networks, have been widely employed by academics during the last decade. In today's rapidly evolving economy, stock market data prediction and analysis play a significant role. Several non-linear models like neural networks, generalized autoregressive conditional heteroskedasticity (GARCH) and autoregressive conditional heteroscedasticity (ARCH), as well as linear models like Auto-Regressive Integrated Moving Average (ARIMA), Moving Average (MA) and Auto-Regressive (AR), may be used for stock forecasting. The deep learning architectures, inclusive of Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Recurrent Neural Networks (RNN), Multilayer Perceptron (MLP) and Support Vector Machine (SVM), are used in this paper for stock price prediction of an organization by using the previously available stock prices. The National Stock Exchange (NSE) of India dataset is used for training the model with day-wise closing prices. Data prediction is performed for a few sample companies selected on a random basis. Based on the comparison results, it is evident that the existing models are outperformed by CNN. The network can also perform stock predictions for other stock markets despite being trained with single-market data, suggesting a common inner dynamics shared between certain stock markets. When compared to the existing linear models, the neural network model outperforms them in a significant manner, which can be observed from the comparison results.


2019 ◽  
Author(s):  
Kevin Berlemont ◽  
Jean-Rémy Martin ◽  
Jérôme Sackur ◽  
Jean-Pierre Nadal

ABSTRACT Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework, and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results from an experiment in which participants are asked to perform an orientation discrimination task, followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high confidence trials). Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (that would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
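A standard two-population reduction of attractor decision dynamics illustrates the winner-take-all behaviour such models rely on. This is a generic textbook-style sketch with assumed parameters, not the authors' model: each population excites itself and inhibits the other, and the stimulus biases their inputs.

```python
# Generic two-population attractor sketch (illustrative parameters, not the
# authors' model): self-excitation plus cross-inhibition yields a winner.
import numpy as np

def simulate(stimulus_bias, steps=2000, dt=0.01, tau=0.1):
    r = np.array([0.1, 0.1])                       # firing rates of the two pools
    for _ in range(steps):
        inp = np.array([0.5 + stimulus_bias, 0.5 - stimulus_bias])
        drive = inp + 0.6 * r - 0.8 * r[::-1]      # self-excitation, cross-inhibition
        r = r + dt / tau * (-r + np.maximum(drive, 0.0))
    return r
```

In such models the decision is the identity of the winning population, and quantities read out from the dynamics (e.g. the activity gap between winner and loser) can serve as a confidence signal.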


2019 ◽  
Vol 490 (1) ◽  
pp. 371-384 ◽  
Author(s):  
Aristide Doussot ◽  
Evan Eames ◽  
Benoit Semelin

ABSTRACT Within the next few years, the Square Kilometre Array (SKA) or one of its pathfinders will hopefully detect the 21-cm signal fluctuations from the Epoch of Reionization (EoR). Then, the goal will be to accurately constrain the underlying astrophysical parameters. Currently, this is mainly done with Bayesian inference. Recently, neural networks have been trained to perform inverse modelling and, ideally, predict the maximum-likelihood values of the model parameters. We build on these by improving the accuracy of the predictions using several supervised learning methods: neural networks, kernel regressions, or ridge regressions. Based on a large training set of 21-cm power spectra, we compare the performances of these methods. When using a noise-free signal generated by the model itself as input, we improve on previous neural network accuracy by one order of magnitude and, using a local ridge kernel regression, we gain another factor of a few. We then reach an accuracy level on the reconstruction of the maximum-likelihood parameter values of a few per cent compared to the 1σ confidence level due to SKA thermal noise (as estimated with Bayesian inference). For an input signal affected by an SKA-like thermal noise but constrained to yield the same maximum-likelihood parameter values as the noise-free signal, our neural network exhibits an error within half of the 1σ confidence level due to the SKA thermal noise. This accuracy improves to 10 per cent of the 1σ level when using the local ridge kernel. We are thus reaching a performance level where supervised learning methods are a viable alternative for determining the maximum-likelihood parameter values.
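Kernel ridge regression, one of the supervised methods compared, can be sketched as follows. A Gaussian kernel is assumed here as a simple stand-in (the paper's "local ridge kernel" is a refinement not reproduced in this sketch); inputs would be power spectra and targets the parameter values.

```python
# Minimal kernel ridge regression sketch with an assumed Gaussian kernel.
import numpy as np

def gaussian_kernel(A, B, length_scale=1.0):
    # Pairwise squared distances between rows of A (n, d) and B (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def fit_kernel_ridge(X, y, lam=1e-6, length_scale=1.0):
    K = gaussian_kernel(X, X, length_scale)
    # Dual ridge solution: alpha = (K + lam*I)^{-1} y.
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda X_new: gaussian_kernel(X_new, X, length_scale) @ alpha
```

The appeal over a neural network is that fitting reduces to one linear solve per training set, and the regularizer lam directly trades off smoothness against fit.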


2018 ◽  
Vol 35 (13) ◽  
pp. 2226-2234 ◽  
Author(s):  
Ameen Eetemadi ◽  
Ilias Tagkopoulos

Abstract
Motivation: Gene expression prediction is one of the grand challenges in computational biology. The availability of transcriptomics data combined with recent advances in artificial neural networks provide an unprecedented opportunity to create predictive models of gene expression with far-reaching applications.
Results: We present the Genetic Neural Network (GNN), an artificial neural network for predicting genome-wide gene expression given gene knockouts and master regulator perturbations. At its core, the GNN maps existing gene regulatory information in its architecture and it uses cell nodes that have been specifically designed to capture the dependencies and non-linear dynamics that exist in gene networks. These two key features make the GNN architecture capable of capturing complex relationships without the need of large training datasets. As a result, GNNs were 40% more accurate on average than competing architectures (MLP, RNN, BiRNN) when compared on hundreds of curated and inferred transcription modules. Our results argue that GNNs can become the architecture of choice when building predictors of gene expression from the exponentially growing corpus of genome-wide transcriptomics data.
Availability and implementation: https://github.com/IBPA/GNN
Supplementary information: Supplementary data are available at Bioinformatics online.
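The core architectural idea, constraining connections to known regulatory edges, can be sketched by masking a layer's weight matrix with the regulatory adjacency matrix. This is illustrative only, not the GNN cell itself:

```python
# Sketch of regulatory-structure-constrained weights: a gene's output depends
# only on its known regulators, enforced by elementwise masking.
import numpy as np

def masked_layer(expression, weights, adjacency):
    """adjacency[i, j] = 1 if gene j regulates gene i, else 0.
    Masked weights zero out all non-regulatory connections."""
    return np.tanh((weights * adjacency) @ expression)
```

Fixing the sparsity pattern this way is what lets such a model learn from few examples: only biologically plausible connections carry trainable parameters.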


1994 ◽  
Vol 6 (4) ◽  
pp. 718-738 ◽  
Author(s):  
Gary M. Scott ◽  
W. Harmon Ray

The KBANN (Knowledge-Based Artificial Neural Networks) approach uses neural networks to refine knowledge that can be written in the form of simple propositional rules. This idea is extended by presenting the MANNIDENT (Multivariable Artificial Neural Network Identification) algorithm by which the mathematical equations of linear dynamic process models determine the topology and initial weights of a network, which is further trained using backpropagation. This method is applied to the task of modeling a nonisothermal chemical reactor in which a first-order exothermic reaction is occurring. This method produces statistically significant gains in accuracy over both a standard neural network approach and a linear model. Furthermore, using the approximate linear model to initialize the weights of the network produces statistically less variation in model fidelity. By structuring the neural network according to the approximate linear model, the model can be readily interpreted.
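The initialization idea can be sketched in its simplest form: a tanh network set up so that, before training, it reproduces a given linear model y ≈ a·x + b. This one-input example is hypothetical and much smaller than the MANNIDENT topology, but it shows why a linear model can fix a network's initial weights:

```python
# Sketch: initialize a tanh unit from the linear model y = a*x + b.
# tanh(z) ~ z for small z, so a small input weight `scale` paired with an
# output weight a/scale makes the untrained net approximate the linear model.
import numpy as np

def init_from_linear(a, b, scale=1e-3):
    w_in, w_out = scale, a / scale
    def net(x):
        return w_out * np.tanh(w_in * x) + b
    return net
```

Training then moves the weights away from this linear regime only where the data demand nonlinearity, which is consistent with the reported lower variation in model fidelity.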


Author(s):  
Mimin Hendriani ◽  
Rais ◽  
Lilies Handayani

Backpropagation is a supervised training method driven by the error in the output produced. Backpropagation neural network training is carried out in three stages: feedforward of the input training pattern, backpropagation of the associated error, and adjustment of the weights. The weights are updated as long as the training results have not converged. The goal error (MSE) value of 0.0070579 was reached at epoch 99994 out of the prescribed 100000 iterations. Based on the regression plot, the training data yielded a correlation coefficient of up to 0.55321. From this it is concluded that the larger the R value obtained, the better the accuracy of the face identification carried out in this study.
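The three stages can be sketched for a single sigmoid unit trained on squared error (a minimal illustration, not the face-identification network itself):

```python
# One backpropagation training step for a single-weight sigmoid unit:
# (1) feedforward, (2) backpropagate the error, (3) adjust the weight.
import math

def train_step(w, x, target, lr=0.5):
    # 1. Feedforward: compute the unit's output.
    y = 1.0 / (1.0 + math.exp(-w * x))
    # 2. Backpropagate: gradient of 0.5*(y - target)^2 through the sigmoid.
    grad = (y - target) * y * (1.0 - y) * x
    # 3. Weight adjustment: gradient descent.
    return w - lr * grad
```

Repeating this step until the mean squared error falls below a goal value (or an iteration limit is hit) is exactly the convergence criterion described above.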

