A Study of $\tau^-$ and $\mu^-$ in the Field of Nuclei Using Neural Network Techniques

2021 ◽  
Vol 12 ◽  
pp. 130
Author(s):  
N. Panagiotides ◽  
T. S. Kosmas

The rate of heavy-lepton (muon or tau) capture by nuclei, as well as the heavy-lepton-to-electron conversion rate, can be calculated when the heavy lepton's wavefunction is known. Analytical calculation of the wavefunction of either of these leptons around any nucleus is not feasible, owing to their small Bohr radii on the one hand and to the finite nuclear extent on the other. A new numerical calculation algorithm is proposed here, which makes use of the concept of neural networks. The main advantage of this new technique is that the wavefunction is produced analytically as a sum of sigmoid functions.
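The representation the abstract describes can be sketched in a few lines: the wavefunction is an analytic sum of sigmoid terms whose coefficients a trained network would supply. The parameters below are illustrative hand-picked values, not fitted to any nucleus.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def psi(r, params):
    # Wavefunction represented analytically as a sum of sigmoids:
    #   psi(r) = sum_i c_i * sigmoid(w_i * r + b_i)
    return sum(c * sigmoid(w * r + b) for (c, w, b) in params)

# Illustrative parameters (assumed, not from the paper): two sigmoid
# terms forming a smooth profile that decays with radius r
params = [(1.0, -2.0, 3.0), (0.5, -1.0, 1.0)]
profile = [psi(r, params) for r in (0.0, 1.0, 2.0, 5.0)]
```

Because the sum is a closed-form expression, derivatives and overlap integrals needed for capture-rate calculations can be taken analytically term by term.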

Author(s):  
Valerii Dmitrienko ◽  
Sergey Leonov ◽  
Mykola Mezentsev

The idea of Belnap's four-valued logic is that modern computers should function normally not only with true input information, but also under conditions of inconsistent and incomplete information. Belnap's logic introduces four truth values: T (true), F (false), N (none: neither true nor false), and B (both: both true and false). For ease of working with these truth values, the designations (1, 0, n, b) are introduced. Belnap's logic can be used to obtain estimates of proximity measures for discrete objects, using the functions of Jaccard and Needham, Russell and Rao, Sokal and Michener, Hamming, etc. In this case, it becomes possible to assess the proximity, recognition and classification of objects under uncertainty, when the truth values are taken from the set (1, 0, n, b). Based on the architecture of the Hamming neural network, neural networks have been developed that allow calculating the distances between objects described using the truth values (1, 0, n, b). Keywords: four-valued Belnap logic, Belnap computer, proximity assessment, recognition and classification, proximity function, neural network.
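A minimal sketch of such proximity measures over four-valued descriptions, assuming objects are tuples over {1, 0, 'n', 'b'}. The handling of 'n' and 'b' in the Jaccard measure below (skipping non-crisp positions) is one simple convention chosen for illustration, not necessarily the paper's exact definition.

```python
# Four truth values: 1 (true), 0 (false), 'n' (none), 'b' (both)
def hamming_distance(x, y):
    # Count positions where two four-valued descriptions disagree
    return sum(1 for a, b in zip(x, y) if a != b)

def jaccard_similarity(x, y):
    # Classical Jaccard over the crisp positions only; positions holding
    # 'n' or 'b' are skipped (an assumed convention for this sketch)
    crisp = [(a, b) for a, b in zip(x, y) if a in (0, 1) and b in (0, 1)]
    both = sum(1 for a, b in crisp if a == 1 and b == 1)
    either = sum(1 for a, b in crisp if a == 1 or b == 1)
    return both / either if either else 0.0

x = (1, 0, 'n', 1)
y = (1, 1, 'b', 0)
```

Here `hamming_distance(x, y)` is 3 (only the first position agrees), and the Jaccard measure is computed from the three crisp positions alone.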


Author(s):  
Longzhu Xiao ◽  
Siuming Lo ◽  
Jiangping Zhou ◽  
Jixiang Liu ◽  
Linchuan Yang

Vibrancy is one of the most desirable outcomes of transit-oriented development (TOD). The vibrancy of a metro station area (MSA) depends partially on the MSA's built-environment features. Predicting an MSA's vibrancy from its built-environment features is of great interest to decision makers, as these features are often modifiable by public interventions. However, existing studies have paid little attention to MSAs' vibrancy. On the one hand, the vibrancy of MSAs has seldom been explicitly explored, and measuring it is essential. On the other hand, because MSAs are interconnected, one MSA's vibrancy depends on its own features and those of relevant MSAs. Hence, selecting a suitable metric that quantifies spatial relationships between MSAs can better predict MSAs' vibrancy. In this study, we identify four single-dimensional vibrancy proxies and fuse them into an integrated index. Moreover, we design a two-layer graph convolutional neural network model that accounts for both the built-environment features of MSAs and the spatial relationships between them. We employ the model in an empirical study in Shenzhen, China, and illustrate (1) how different metrics of spatial relationships influence the prediction of MSAs' vibrancy; (2) how the predictability varies across single-dimensional and integrated proxies of MSAs' vibrancy; and (3) how the findings of this study can enlighten decision makers. This study enriches our understanding of spatial relationships between MSAs. Moreover, it can help decision makers craft targeted policies for developing MSAs towards TOD.
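The core operation of a graph convolutional layer like the ones in the two-layer model described above can be sketched as H' = ReLU(Â·H·W), where Â encodes the (normalized) spatial relationships between stations, H holds per-station built-environment features, and W is a learned weight matrix. The matrices below are toy values for illustration only.

```python
def matmul(A, B):
    # Plain dense matrix multiply
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gcn_layer(A_hat, H, W):
    # One graph-convolution layer: H' = ReLU(A_hat . H . W)
    Z = matmul(matmul(A_hat, H), W)
    return [[max(0.0, v) for v in row] for row in Z]

# Toy example: 3 stations, 2 features each; stations 0 and 1 are linked,
# station 2 is isolated (identity weights so propagation is easy to trace)
A_hat = [[0.5, 0.5, 0.0], [0.5, 0.5, 0.0], [0.0, 0.0, 1.0]]
H = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
W = [[1.0, 0.0], [0.0, 1.0]]
H1 = gcn_layer(A_hat, H, W)
```

The linked stations end up sharing features (their rows are averaged), while the isolated station keeps its own, which is exactly how the choice of Â shapes the prediction.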


Author(s):  
Mingmin Zhen ◽  
Jinglu Wang ◽  
Lei Zhou ◽  
Tian Fang ◽  
Long Quan

Semantic segmentation is pixel-wise classification that retains critical spatial information. "Feature map reuse" has been commonly adopted in CNN-based approaches to exploit feature maps from the early layers for later spatial reconstruction. Along this direction, we go a step further by proposing a fully dense neural network with an encoder-decoder structure, which we abbreviate as FDNet. For each stage in the decoder module, feature maps of all the previous blocks are adaptively aggregated and fed forward as input. On the one hand, this reconstructs spatial boundaries accurately; on the other hand, it learns more efficiently thanks to more efficient gradient backpropagation. In addition, we propose a boundary-aware loss function that focuses more attention on pixels near the boundary, which improves the labeling of "hard examples". FDNet achieves the best performance over previous works on two benchmark datasets, PASCAL VOC 2012 and NYUDv2, when not considering training on other datasets.
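One simple way to realize a boundary-aware loss of the kind described above is a per-pixel cross-entropy scaled by a weight map that is larger near segmentation boundaries. The exact weighting scheme here is an assumption for illustration, not FDNet's published formula.

```python
import math

def boundary_aware_ce(probs, labels, weights):
    # Binary cross-entropy where each pixel's loss is scaled by a weight
    # that is larger for pixels near a segmentation boundary
    total = 0.0
    for p, y, w in zip(probs, labels, weights):
        total += -w * (math.log(p) if y == 1 else math.log(1.0 - p))
    return total / len(probs)

probs = [0.9, 0.8, 0.6]      # predicted foreground probabilities
labels = [1, 1, 1]           # ground truth
uniform = [1.0, 1.0, 1.0]    # plain cross-entropy
boundary = [1.0, 1.0, 2.0]   # last pixel assumed to lie near a boundary
```

Up-weighting the least confident pixel (the "hard example" near the boundary) raises its contribution, so the optimizer spends more effort there.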


Energies ◽  
2022 ◽  
Vol 15 (2) ◽  
pp. 588
Author(s):  
Felipe Leite Coelho da Silva ◽  
Kleyton da Costa ◽  
Paulo Canas Rodrigues ◽  
Rodrigo Salas ◽  
Javier Linkolk López-Gonzales

Forecasting the industry’s electricity consumption is essential for energy planning in a given country or region. Thus, this study aims to apply time-series forecasting models (statistical approach and artificial neural network approach) to the industrial electricity consumption in the Brazilian system. For the statistical approach, the Holt–Winters, SARIMA, Dynamic Linear Model, and TBATS (Trigonometric Box–Cox transform, ARMA errors, Trend, and Seasonal components) models were considered. For the approach of artificial neural networks, the NNAR (neural network autoregression) and MLP (multilayer perceptron) models were considered. The results indicate that the MLP model was the one that obtained the best forecasting performance for the electricity consumption of the Brazilian industry under analysis.
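An NNAR(p)-style neural autoregression, like the one compared above, is trained on lagged windows of the series. A minimal sketch of that input construction (the model on top is omitted):

```python
def embed_lags(series, p):
    # Build (lag vector, target) training pairs for an autoregressive
    # model of order p: inputs y[t-p..t-1], target y[t]
    return [(series[t - p:t], series[t]) for t in range(p, len(series))]

# Toy monthly consumption series; real inputs would be the Brazilian
# industrial electricity consumption data
pairs = embed_lags([1, 2, 3, 4, 5], 2)
```

Each pair feeds one training step of the MLP or NNAR model; forecasting then rolls the window forward, feeding predictions back in as lags.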


2020 ◽  
Vol 15 ◽  
pp. 258
Author(s):  
S. Athanasopoulos ◽  
E. Mavrommatis ◽  
K. A. Gernoth ◽  
J. W. Clark

We evaluate the location of the proton drip line in the regions 31≤Z≤49 and 73≤Z≤91 based on the one- and two-proton separation energies predicted by our latest Hybrid Mass Model. The latter is constructed by complementing the mass-excess values ΔM predicted by the Finite Range Droplet Model (FRDM) of Moeller et al. with a neural network model trained to predict the differences ΔMexp − ΔMFRDM between these values and the experimental mass-excess values published in the 2003 Atomic Mass Evaluation AME03.
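The drip-line criterion above reduces to the sign of the one-proton separation energy, which follows directly from mass excesses: S_p(Z, N) = ΔM(Z-1, N) + ΔM(¹H) - ΔM(Z, N). A sketch with a fabricated mass-excess table (the ΔM values below are invented for illustration; only the ¹H mass excess is the standard value):

```python
DELTA_M_H1 = 7.289  # mass excess of 1H in MeV (standard value)

def one_proton_separation(dm, Z, N):
    # S_p(Z, N) = dM(Z-1, N) + dM(1H) - dM(Z, N), in MeV; the proton
    # drip line is crossed where S_p turns negative
    return dm[(Z - 1, N)] + DELTA_M_H1 - dm[(Z, N)]

# Illustrative (fabricated) mass-excess table, MeV
dm = {(49, 50): -50.0, (50, 50): -45.0, (51, 50): -36.0}
bound = one_proton_separation(dm, 50, 50) > 0    # S_p = +2.289 MeV
unbound = one_proton_separation(dm, 51, 50) > 0  # S_p = -1.711 MeV
```

In the hybrid model, each ΔM entry would be the FRDM value plus the network-predicted residual ΔMexp − ΔMFRDM, so the correction propagates directly into the separation energies.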


Author(s):  
Tahani Aljohani ◽  
Alexandra I. Cristea

Massive Open Online Courses (MOOCs) have become universal learning resources, and the COVID-19 pandemic is rendering these platforms even more necessary. In this paper, we seek to improve Learner Profiling (LP), i.e. estimating the demographic characteristics of learners on MOOC platforms. We have focused on examining models which show promise elsewhere but were never examined in the LP area (deep learning models) based on effective textual representations. As LP characteristics, we predict here the employment status of learners. We compare sequential and parallel ensemble deep learning architectures based on Convolutional Neural Networks and Recurrent Neural Networks, obtaining an average high accuracy of 96.3% for our best method. Next, we predict the gender of learners based on syntactic knowledge from the text. We compare different tree-structured Long Short-Term Memory models (as state-of-the-art candidates) and provide our novel version of a Bi-directional composition function for existing architectures. In addition, we evaluate 18 different combinations of word-level encoding and sentence-level encoding functions. Based on these results, we show that our Bi-directional model outperforms all other models; the highest accuracy among our models is achieved by the combination of a FeedForward Neural Network and the Stack-augmented Parser-Interpreter Neural Network (82.60% prediction accuracy). We argue that the prediction models we recommend for both demographic characteristics examined in this study can achieve high accuracy. This is also the first time a sound methodological approach has been proposed for improving the accuracy of learner demographics classification on MOOCs.
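A parallel ensemble of the kind compared above combines the branch outputs rather than chaining them; a common way to do this (assumed here for illustration; the paper's exact fusion rule may differ) is soft voting over the branches' class probabilities.

```python
def soft_vote(p_cnn, p_rnn):
    # Parallel ensemble: average the per-class probabilities of a CNN
    # branch and an RNN branch, then pick the argmax class
    avg = [(a + b) / 2.0 for a, b in zip(p_cnn, p_rnn)]
    return max(range(len(avg)), key=avg.__getitem__)

# Hypothetical per-class probabilities for one learner's text
# (class 0 = not employed, class 1 = employed, say)
label = soft_vote([0.2, 0.8], [0.6, 0.4])
```

A sequential ensemble would instead feed one branch's representation into the other, which is the main architectural choice the comparison explores.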


2014 ◽  
Vol 651-653 ◽  
pp. 1772-1775
Author(s):  
Wei Gong

The abilities of generalization, learning, self-adaptation, and inherently parallel computation make artificial neural networks suitable for intrusion detection. On the other hand, data-fusion-based IDSs have been used to reduce the false-alarm rate and the failure-to-report rate and to improve performance. However, multi-sensor input data makes such an IDS lose efficiency. Research on neural-network-based data-fusion IDSs tries to combine the strong processing ability of neural networks with the advantages of data-fusion IDSs. A neural network is designed to realize the data fusion and intrusion analysis, and a pruning algorithm is used during intrusion analysis to filter the information from multiple sensors, so as to increase performance and save network bandwidth.
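The paper does not specify its pruning algorithm; magnitude pruning is one common choice and serves as a sketch of the idea of filtering low-value multi-sensor contributions.

```python
def prune_weights(weights, threshold):
    # Magnitude pruning (an assumed stand-in for the paper's algorithm):
    # zero out connections whose absolute weight falls below a threshold,
    # discarding sensor inputs that contribute little to the fusion
    return [w if abs(w) >= threshold else 0.0 for w in weights]

pruned = prune_weights([0.9, -0.05, 0.3, 0.01], 0.1)
```

Zeroed connections need not be transmitted or computed, which is where the bandwidth and performance savings come from.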


2000 ◽  
Vol 31 (4) ◽  
pp. 137-140
Author(s):  
Amine Bensaid ◽  
Bouchra Bouqata ◽  
Ralph Palliam

There are numerous methods for estimating forward interest rates, as well as many studies testing the accuracy of these methods. The approach proposed in this study is similar to previous works in two respects: firstly, a Monte Carlo simulation is used instead of empirical data to circumvent empirical difficulties; and secondly, accuracy is measured by estimating the forward rates directly rather than by exploring bond prices, which is more consistent with user objectives. The method presented here departs from the others in that it uses a Recurrent Artificial Neural Network (RANN) as an alternative technique for forecasting forward interest rates. Its performance is compared to that of a recursive method that has produced some of the best results in previous studies of forward interest rate forecasting.
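For context on the quantity being forecast: a forward rate is implied by a pair of zero-coupon bond prices. A minimal sketch, assuming annual compounding (the study's exact conventions are not given in the abstract):

```python
def implied_forward_rate(p1, p2, t1, t2):
    # Forward rate between t1 and t2 implied by zero-coupon bond prices
    # P(t1) = p1 and P(t2) = p2, under annual compounding (assumed)
    return (p1 / p2) ** (1.0 / (t2 - t1)) - 1.0

# Illustrative prices: 1-year bond at 0.95, 2-year bond at 0.90
f = implied_forward_rate(0.95, 0.90, 1, 2)
```

Estimating this rate directly, rather than fitting bond prices and differentiating the curve, is the accuracy criterion the study argues is closer to user objectives.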


2019 ◽  
Vol 9 (16) ◽  
pp. 3391 ◽  
Author(s):  
Santiago Pascual ◽  
Joan Serrà ◽  
Antonio Bonafonte

Conversion from text to speech relies on the accurate mapping from linguistic to acoustic symbol sequences, for which current practice employs recurrent statistical models such as recurrent neural networks. Despite the good performance of such models (in terms of low distortion in the generated speech), their recursive structure with intermediate affine transformations tends to make them slow to train and to sample from. In this work, we explore two different mechanisms that enhance the operational efficiency of recurrent neural networks, and study their performance–speed trade-off. The first mechanism is based on the quasi-recurrent neural network, where expensive affine transformations are removed from temporal connections and placed only on feed-forward computational directions. The second mechanism includes a module based on the transformer decoder network, designed without recurrent connections but emulating them with attention and positioning codes. Our results show that the proposed decoder networks are competitive in terms of distortion when compared to a recurrent baseline, whilst being significantly faster in terms of CPU and GPU inference time. The best performing model is the one based on the quasi-recurrent mechanism, reaching the same level of naturalness as the recurrent neural network based model with a speedup of 11.2 on CPU and 3.3 on GPU.
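The quasi-recurrent mechanism's speed comes from moving the expensive affine transforms into convolutions and leaving only a cheap elementwise recurrence, the so-called f-pooling: h_t = f_t·h_{t-1} + (1 − f_t)·z_t. A sketch of just that pooling step (the convolutional gates that would produce z and f are omitted):

```python
def qrnn_f_pool(z, f, h0=0.0):
    # QRNN f-pooling: z_t are candidate values and f_t forget gates, both
    # produced by convolutions; the recurrence itself is elementwise,
    #   h_t = f_t * h_{t-1} + (1 - f_t) * z_t
    h, states = h0, []
    for zt, ft in zip(z, f):
        h = ft * h + (1.0 - ft) * zt
        states.append(h)
    return states

states = qrnn_f_pool([1.0, 1.0], [0.5, 0.5])
```

Because the recurrence contains no matrix multiply, the per-timestep cost is tiny, which is what drives the reported CPU and GPU speedups.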


2018 ◽  
Vol 28 (05) ◽  
pp. 1750021 ◽  
Author(s):  
Alessandra M. Soares ◽  
Bruno J. T. Fernandes ◽  
Carmelo J. A. Bastos-Filho

Pyramidal Neural Networks (PNN) are an example of a successful recently proposed model inspired by the human visual system and deep learning theory. PNNs are applied to computer vision and are based on the concept of receptive fields. This paper proposes a variation of the PNN, named here the Structured Pyramidal Neural Network (SPNN). The SPNN has self-adaptive, variable receptive fields, while the original PNNs rely on the same field size for all neurons, which limits the model since it is not possible to devote more computing resources to a particular region of the image. Another limitation of the original approach is the need to define values for a considerable number of parameters, which can make PNNs difficult to apply for users without experience. In contrast, the SPNN requires fewer parameters; its structure is determined using a novel method based on Delaunay triangulation and k-means clustering. The SPNN achieved better results than PNNs and similar performance compared to Convolutional Neural Networks (CNN) and Support Vector Machines (SVM), while using less memory and processing time.
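The k-means part of the structure-finding step can be sketched as Lloyd iterations over 2D points (e.g. salient image locations), whose resulting cluster centers could anchor variable receptive fields. The points and centers below are toy values; the Delaunay-triangulation stage is omitted.

```python
def nearest(point, centers):
    # Index of the center closest to the point (squared Euclidean)
    return min(range(len(centers)),
               key=lambda i: (point[0] - centers[i][0]) ** 2 +
                             (point[1] - centers[i][1]) ** 2)

def kmeans_step(points, centers):
    # One Lloyd iteration: assign points to their nearest center, then
    # move each center to the mean of its cluster (unchanged if empty)
    labels = [nearest(p, centers) for p in points]
    new_centers = []
    for i, c in enumerate(centers):
        cluster = [p for p, lab in zip(points, labels) if lab == i]
        if cluster:
            new_centers.append((sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster)))
        else:
            new_centers.append(c)
    return labels, new_centers

points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
labels, centers = kmeans_step(points, [(0.0, 0.0), (10.0, 10.0)])
```

Regions where points cluster densely attract more centers, which is how the SPNN can devote more receptive fields (and hence computation) to busy parts of the image.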

