NEURAL FUZZY PREFERENCE INTEGRATION USING NEURAL PREFERENCE MOORE MACHINES

2000 ◽  
Vol 10 (04) ◽  
pp. 287-309 ◽  
Author(s):  
Stefan Wermter

This paper describes preference classes and preference Moore machines as a basis for integrating different hybrid neural representations. Preference classes are shown to provide a basic link between neural preferences and fuzzy representations at the preference class level. Preference Moore machines provide a link between recurrent neural networks and symbolic transducers at the preference Moore machine level. We demonstrate how the concepts of preference classes and preference Moore machines can be used to interpret neural network representations and to integrate knowledge from hybrid neural representations. One main contribution of this paper is the introduction and analysis of neural preference Moore machines and their link to a fuzzy interpretation. Furthermore, we illustrate the interpretation and combination of various neural preference Moore machines with additional real-world examples.
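As a rough illustration of the idea (not Wermter's exact formalism), the following Python sketch shows a Moore machine whose state outputs are graded preference values over symbolic classes rather than crisp symbols; the states, inputs and preference values are invented for the example.

```python
# Minimal sketch of a "preference Moore machine": a Moore machine whose
# output for each state is a vector of graded preferences (fuzzy memberships)
# rather than a single crisp symbol. States, inputs and values are invented.

class PreferenceMooreMachine:
    def __init__(self, transitions, outputs, start):
        self.transitions = transitions   # (state, input symbol) -> next state
        self.outputs = outputs           # state -> {class: preference in [0, 1]}
        self.state = start

    def step(self, symbol):
        self.state = self.transitions[(self.state, symbol)]
        return self.outputs[self.state]  # Moore machine: output depends on state only

    def run(self, symbols):
        return [self.step(s) for s in symbols]

# Toy example: two states whose outputs are fuzzy memberships over two classes.
m = PreferenceMooreMachine(
    transitions={("q0", "a"): "q1", ("q0", "b"): "q0",
                 ("q1", "a"): "q1", ("q1", "b"): "q0"},
    outputs={"q0": {"noun": 0.8, "verb": 0.2},
             "q1": {"noun": 0.1, "verb": 0.9}},
    start="q0",
)
print(m.run(["a", "b", "a"]))
```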

2004 ◽  
Vol 213 ◽  
pp. 483-486
Author(s):  
David Brodrick ◽  
Douglas Taylor ◽  
Joachim Diederich

A recurrent neural network was trained to detect the time-frequency domain signature of narrowband radio signals against a background of astronomical noise. The objective was to investigate the use of recurrent networks for signal detection in the Search for Extra-Terrestrial Intelligence, though the problem is closely analogous to the detection of some classes of Radio Frequency Interference in radio astronomy.


2019 ◽  
Author(s):  
Stefan L. Frank ◽  
John Hoeks

Recurrent neural network (RNN) models of sentence processing have recently displayed a remarkable ability to learn aspects of structure comprehension, as evidenced by their success in accounting for reading times on sentences with local syntactic ambiguities (i.e., garden-path effects). Here, we investigate whether these models can also simulate the effect of the semantic appropriateness of the ambiguity's readings. RNN-based estimates of surprisal of the disambiguating verb of sentences with an NP/S-coordination ambiguity (as in `The wizard guards the king and the princess protects ...') show the same patterns as human reading times on the same sentences: Surprisal is higher on ambiguous structures than on their disambiguated counterparts, and this effect is weaker, but not absent, in cases of poor thematic fit between the verb and its potential object (`The teacher baked the cake and the baker made ...'). These results show that an RNN is able to simultaneously learn about structural and semantic relations between words and suggest that garden-path phenomena may be more closely related to word predictability than traditionally assumed.
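For readers unfamiliar with surprisal, the sketch below shows how it can be read off an RNN language model as the negative log-probability of the disambiguating word given its preceding context; the model, vocabulary size and token ids are placeholders, not the networks or corpora used in the paper.

```python
import math
import torch
import torch.nn as nn

# Illustrative RNN language model; sizes are arbitrary placeholders.
class RNNLM(nn.Module):
    def __init__(self, vocab_size, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ids):
        h, _ = self.rnn(self.emb(ids))
        return self.out(h)                      # next-word logits at each position

def surprisal(model, ids, position):
    """Surprisal (in bits) of the word at `position`, given the preceding words."""
    with torch.no_grad():
        logits = model(ids[:, :position])       # condition on the words before `position`
        log_probs = torch.log_softmax(logits[:, -1], dim=-1)
        return -log_probs[0, ids[0, position]].item() / math.log(2.0)

model = RNNLM(vocab_size=1000)
sentence = torch.randint(0, 1000, (1, 8))       # stand-in token ids for one sentence
print(surprisal(model, sentence, position=5))   # surprisal of the 6th token
```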


Inventions ◽  
2021 ◽  
Vol 6 (4) ◽  
pp. 70
Author(s):  
Elena Solovyeva ◽  
Ali Abdullah

In this paper, the structure of a separable convolutional neural network consisting of an embedding layer, separable convolutional layers, a convolutional layer, and global average pooling is presented for binary and multiclass text classification. The advantage of the proposed structure is the absence of multiple fully connected layers, which are commonly used to increase classification accuracy but raise the computational cost. The combination of low-cost separable convolutional layers and a convolutional layer is proposed to achieve high accuracy while reducing the complexity of the neural classifiers. The advantages are demonstrated on binary and multiclass classification of written texts using the proposed networks with sigmoid and softmax activation functions in the convolutional layer. For both binary and multiclass classification, the accuracy obtained by the separable convolutional neural networks is higher than that of several investigated types of recurrent neural networks and fully connected networks.
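A minimal sketch of the described structure (embedding, separable convolutions, a convolutional layer and global average pooling) is given below; the layer sizes, kernel widths and use of PyTorch are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SeparableConv1d(nn.Module):
    """Depthwise convolution followed by a pointwise (1x1) convolution."""
    def __init__(self, in_ch, out_ch, kernel=3):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel, padding=kernel // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SeparableTextClassifier(nn.Module):
    """Embedding -> separable convolutions -> convolution -> global average pooling."""
    def __init__(self, vocab_size, num_classes, emb=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.sep1 = SeparableConv1d(emb, 128)
        self.sep2 = SeparableConv1d(128, 128)
        self.conv = nn.Conv1d(128, num_classes, kernel_size=3, padding=1)

    def forward(self, ids):
        x = self.emb(ids).transpose(1, 2)        # (batch, channels, sequence)
        x = torch.relu(self.sep1(x))
        x = torch.relu(self.sep2(x))
        x = self.conv(x)
        return x.mean(dim=2)                     # global average pooling replaces dense layers

model = SeparableTextClassifier(vocab_size=20000, num_classes=4)
logits = model(torch.randint(0, 20000, (2, 50)))  # sigmoid/softmax applied in the loss
```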


SINERGI ◽  
2020 ◽  
Vol 24 (1) ◽  
pp. 29
Author(s):  
Widi Aribowo

Load shedding plays a key part in avoiding power system outages. Frequency and voltage instability can split a power system into sub-systems and lead to outages as well as severe breakdown of the system. In recent years, neural networks have been very successful in several signal processing and control applications, and recurrent neural networks are capable of handling complex and non-linear problems. This paper presents an algorithm for load shedding using an Elman recurrent neural network (RNN). Elman proposed a partially recurrent network in which the feedforward connections are modifiable and the recurrent connections are fixed. The method is implemented in MATLAB and its performance is tested on a 6-bus system. The results are compared with a genetic algorithm (GA), a hybrid combining a genetic algorithm with a feedforward neural network, and a standard RNN. The proposed method is capable of determining the required amount of load to shed and is more efficient than the other methods.
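The sketch below illustrates the kind of partially recurrent Elman network described above, with trainable feedforward weights and fixed recurrent (context) connections; the input and output dimensions for the 6-bus load-shedding task are invented placeholders, and the original work uses MATLAB rather than Python.

```python
import torch
import torch.nn as nn

class ElmanPartial(nn.Module):
    """Partially recurrent Elman network: trainable feedforward weights, fixed context weights."""
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_hidden)            # modifiable feedforward weights
        self.w_ctx = nn.Linear(n_hidden, n_hidden, bias=False)
        self.w_ctx.weight.requires_grad_(False)          # fixed recurrent connections
        nn.init.eye_(self.w_ctx.weight)                  # one-to-one copy from the context units
        self.w_out = nn.Linear(n_hidden, n_out)

    def forward(self, x_seq):                            # x_seq: (batch, time, n_in)
        h = torch.zeros(x_seq.size(0), self.w_out.in_features)
        for t in range(x_seq.size(1)):
            h = torch.tanh(self.w_in(x_seq[:, t]) + self.w_ctx(h))
        return self.w_out(h)                             # e.g. per-bus load-shedding amounts

net = ElmanPartial(n_in=12, n_hidden=20, n_out=6)        # 6 outputs ~ one per bus (assumption)
shed = net(torch.randn(4, 10, 12))
```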


2021 ◽  
pp. 1-43
Author(s):  
Alfred Rajakumar ◽  
John Rinzel ◽  
Zhe S. Chen

Recurrent neural networks (RNNs) have been widely used to model sequential neural dynamics (“neural sequences”) of cortical circuits in cognitive and motor tasks. Efforts to incorporate biological constraints and Dale's principle will help elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN was robust in triggering the sequence in response to various input signals and interpolated time-warped inputs for sequence representation. Interestingly, a learned sequence can repeat periodically when the RNN is run beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, with growing or damping modes, together with the RNN's nonlinearity, was sufficient to generate a limit cycle attractor. We further examined the stability of dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in excitatory-inhibitory RNNs.
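One common way to impose Dale's principle, sketched below, is to give every presynaptic unit a fixed sign and learn only non-negative weight magnitudes; the 80/20 excitatory-inhibitory split and network size are assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn

class EIRecurrentLayer(nn.Module):
    """Recurrent layer whose weights respect Dale's principle: each presynaptic
    unit is either excitatory (outgoing weights >= 0) or inhibitory (<= 0)."""
    def __init__(self, n_units=100, frac_excitatory=0.8):
        super().__init__()
        n_exc = int(n_units * frac_excitatory)
        signs = torch.ones(n_units)
        signs[n_exc:] = -1.0                              # remaining units are inhibitory
        self.register_buffer("signs", signs)
        self.w_raw = nn.Parameter(torch.randn(n_units, n_units) * 0.1)

    def effective_weights(self):
        # Non-negative magnitudes times a fixed sign per presynaptic (column) unit.
        return torch.abs(self.w_raw) * self.signs

    def forward(self, h):
        return torch.tanh(h @ self.effective_weights().T)

layer = EIRecurrentLayer()
eigvals = torch.linalg.eigvals(layer.effective_weights())  # inspect growing/damping modes
```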


Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. U21-U29
Author(s):  
Gabriel Fabien-Ouellet ◽  
Rahul Sarkar

Applying deep learning to 3D velocity model building remains a challenge due to the sheer volume of data required to train large-scale artificial neural networks. Moreover, little is known about what types of network architectures are appropriate for such a complex task. To ease the development of a deep-learning approach for seismic velocity estimation, we have evaluated a simplified surrogate problem — the estimation of the root-mean-square (rms) and interval velocity in time from common-midpoint gathers — for 1D layered velocity models. We have developed a deep neural network, whose design was inspired by the information flow found in semblance analysis. The network replaces semblance estimation by a representation built with a deep convolutional neural network, and then it performs velocity estimation automatically with recurrent neural networks. The network is trained with synthetic data to identify primary reflection events, rms velocity, and interval velocity. For a synthetic test set containing 1D layered models, we find that rms and interval velocity are accurately estimated, with an error of less than [Formula: see text] for the rms velocity. We apply the neural network to a real 2D marine survey and obtain accurate rms velocity predictions leading to a coherent stacked section, in addition to an estimation of the interval velocity that reproduces the main structures in the stacked section. Our results provide strong evidence that neural networks can estimate velocity from seismic data and that good performance can be achieved on real data even if the training is based on synthetics. The findings for the 1D problem suggest that deep convolutional encoders and recurrent neural networks are promising components of more complex networks that can perform 2D and 3D velocity model building.
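A hedged sketch of the described information flow, a convolutional encoder over a common-midpoint gather followed by recurrent layers that emit a velocity per time sample, is shown below; the channel counts, kernel sizes and two-output head are assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Convolutional encoder over a CMP gather, then an RNN over time that predicts
    per-sample rms and interval velocity (sizes are illustrative assumptions)."""
    def __init__(self, n_offsets=64, hidden=128):
        super().__init__()
        # Treat offsets as channels so the convolution runs along the time axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_offsets, 64, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=15, padding=7), nn.ReLU(),
        )
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # per-sample rms and interval velocity

    def forward(self, gather):                     # gather: (batch, n_offsets, n_time)
        feats = self.encoder(gather).transpose(1, 2)
        seq, _ = self.rnn(feats)
        return self.head(seq)                      # (batch, n_time, 2)

net = VelocityNet()
velocities = net(torch.randn(1, 64, 1000))         # one synthetic gather, 1000 time samples
```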


2019 ◽  
Vol 9 (16) ◽  
pp. 3391 ◽  
Author(s):  
Santiago Pascual ◽  
Joan Serrà ◽  
Antonio Bonafonte

Conversion from text to speech relies on the accurate mapping from linguistic to acoustic symbol sequences, for which current practice employs recurrent statistical models such as recurrent neural networks. Despite the good performance of such models (in terms of low distortion in the generated speech), their recursive structure with intermediate affine transformations tends to make them slow to train and to sample from. In this work, we explore two different mechanisms that enhance the operational efficiency of recurrent neural networks, and study their performance–speed trade-off. The first mechanism is based on the quasi-recurrent neural network, where expensive affine transformations are removed from temporal connections and placed only on feed-forward computational directions. The second mechanism includes a module based on the transformer decoder network, designed without recurrent connections but emulating them with attention and positioning codes. Our results show that the proposed decoder networks are competitive in terms of distortion when compared to a recurrent baseline, whilst being significantly faster in terms of CPU and GPU inference time. The best performing model is the one based on the quasi-recurrent mechanism, reaching the same level of naturalness as the recurrent neural network based model with a speedup of 11.2 on CPU and 3.3 on GPU.
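To make the quasi-recurrent idea concrete, the sketch below implements a QRNN-style layer in which the affine maps are causal convolutions over time and the only recurrence is an element-wise fo-pooling step; the dimensions are illustrative and this is not the paper's TTS model.

```python
import torch
import torch.nn as nn

class QRNNLayer(nn.Module):
    """Quasi-recurrent layer: convolutions produce candidate/forget/output gates,
    and the recurrence is a cheap element-wise pooling over time."""
    def __init__(self, n_in, n_hidden, kernel=2):
        super().__init__()
        # One causal convolution produces candidate, forget and output gates.
        self.conv = nn.Conv1d(n_in, 3 * n_hidden, kernel, padding=kernel - 1)
        self.n_hidden = n_hidden

    def forward(self, x):                           # x: (batch, time, n_in)
        t = x.size(1)
        zfo = self.conv(x.transpose(1, 2))[:, :, :t]    # keep the causal part only
        z, f, o = zfo.chunk(3, dim=1)
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)
        c = torch.zeros(x.size(0), self.n_hidden, device=x.device)
        outs = []
        for step in range(t):                       # element-wise recurrence, no affine map
            c = f[:, :, step] * c + (1 - f[:, :, step]) * z[:, :, step]
            outs.append(o[:, :, step] * c)
        return torch.stack(outs, dim=1)              # (batch, time, n_hidden)

layer = QRNNLayer(n_in=80, n_hidden=256)
y = layer(torch.randn(2, 100, 80))
```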


2009 ◽  
Vol 21 (11) ◽  
pp. 3214-3227
Author(s):  
James Ting-Ho Lo

By a fundamental neural filtering theorem, a recurrent neural network with fixed weights is known to be capable of adapting to an uncertain environment. This letter reports some mathematical results on the performance of such adaptation for series-parallel identification of a dynamical system as compared with the performance of the best series-parallel identifier possible under the assumption that the precise value of the uncertain environmental process is given. In short, if an uncertain environmental process is observable (not necessarily constant) from the output of a dynamical system or constant (not necessarily observable), then a recurrent neural network exists as a series-parallel identifier of the dynamical system whose output approaches the output of an optimal series-parallel identifier using the environmental process as an additional input.
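The series-parallel configuration referred to above can be pictured as an identifier that receives the plant's measured past output (rather than its own prediction) together with the current input; the toy sketch below, with invented sizes and signals, illustrates that arrangement.

```python
import torch
import torch.nn as nn

class SeriesParallelIdentifier(nn.Module):
    """Series-parallel identification: the RNN sees the plant's measured output
    and the current input, and predicts the plant's next output."""
    def __init__(self, n_u=1, n_y=1, hidden=32):
        super().__init__()
        self.rnn = nn.GRUCell(n_u + n_y, hidden)
        self.out = nn.Linear(hidden, n_y)
        self.hidden = hidden

    def forward(self, u_seq, y_seq):
        # u_seq: (batch, T, n_u) plant inputs; y_seq: (batch, T, n_y) measured plant outputs
        h = torch.zeros(u_seq.size(0), self.hidden)
        preds = []
        for t in range(u_seq.size(1)):
            h = self.rnn(torch.cat([u_seq[:, t], y_seq[:, t]], dim=1), h)
            preds.append(self.out(h))               # prediction of the plant output at t + 1
        return torch.stack(preds, dim=1)

ident = SeriesParallelIdentifier()
y_hat = ident(torch.randn(8, 50, 1), torch.randn(8, 50, 1))
```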

