Deep neural networks for waves assisted by the Wiener–Hopf method

Author(s):  
Xun Huang

In this work, the classical Wiener–Hopf method is incorporated into emerging deep neural networks for the study of certain wave problems. The essential idea is to use the first-principle-based analytical method to efficiently produce a large volume of datasets that supervise the learning of data-hungry deep neural networks, and to further explain the working mechanisms underneath. To demonstrate this combinational research strategy, a deep feed-forward network is first used to approximate the forward propagation model of a duct acoustic problem, which finds important aerospace applications in aeroengine noise tests. Next, a convolutional U-net is developed to learn spatial derivatives in wave equations, which could help to promote a new computational paradigm in mathematical physics and engineering applications. A couple of extensions of the U-net architecture are proposed to further impose possible physical constraints. Finally, after giving the implementation details, the performance of the neural networks is studied by comparison with analytical solutions from the Wiener–Hopf method. Overall, the Wiener–Hopf method is used here from a totally new perspective, and this combinational research strategy represents the key achievement of this work.
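The supervision loop described here can be made concrete with a short sketch: an analytical solver cheaply generates labelled samples, and a small feed-forward network is fitted to them. The PyTorch code below is a minimal illustration under stated assumptions; `analytical_solution` is a hypothetical stand-in for the Wiener–Hopf solution of the duct acoustic problem, and the input/output choices (two coordinates in, one field amplitude out) are assumptions, not the paper's actual setup.

```python
import torch
import torch.nn as nn

def analytical_solution(x):
    # Placeholder for a Wiener-Hopf-derived field: any cheap closed-form,
    # wave-like response suffices to illustrate the supervision loop.
    return torch.sin(2.0 * torch.pi * x[:, :1]) * torch.exp(-x[:, 1:2])

# Inputs, e.g. (axial coordinate, frequency); output, e.g. pressure amplitude.
X = torch.rand(4096, 2)
y = analytical_solution(X)

model = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
```

The point of the design is that labels cost almost nothing: the analytical method can generate as many training pairs as the data-hungry network needs, and the same closed-form solutions later serve as ground truth when evaluating the trained surrogate.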

Author(s):  
Giovanni Acampora ◽  
Roberto Schiattarella

Quantum computers have become a reality thanks to the efforts of several major companies in developing innovative technologies that enable the use of quantum effects in computation, paving the way towards the design of efficient quantum algorithms for different application domains, from finance and chemistry to artificial and computational intelligence. However, there are still technological limitations that prevent the correct design of quantum algorithms, compromising the achievement of the so-called quantum advantage. Specifically, a major limitation in the design of a quantum algorithm is its proper mapping to a specific quantum processor so that the underlying physical constraints are satisfied. This hard problem, known as circuit mapping, is a critical task in the quantum world, and it needs to be addressed efficiently for quantum computers to work correctly and productively. In order to bridge this gap, this paper introduces the first circuit mapping approach based on deep neural networks, opening a completely new scenario in which the correct execution of quantum algorithms is supported by classical machine learning techniques. As shown in the experimental section, the proposed approach speeds up current state-of-the-art mapping algorithms when used on 5-qubit IBM Q processors, while maintaining suitable mapping accuracy.
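Although the abstract does not specify the network's encoding or architecture, one plausible reading is to cast circuit mapping as supervised classification: features summarize the logical circuit's qubit interactions, and labels record the layout chosen by an existing classical mapper. The sketch below is a hedged illustration of that framing; the feature encoding, layout labels, and random stand-in data are all assumptions.

```python
import torch
import torch.nn as nn

N_QUBITS = 5          # e.g. a 5-qubit IBM Q device
N_LAYOUTS = 120       # 5! candidate initial layouts (permutations)

# Feature: flattened qubit-interaction matrix of the logical circuit
# (counts of two-qubit gates between each logical-qubit pair).
# Label: index of the layout a classical mapper (the supervisor) chose.
X = torch.rand(1024, N_QUBITS * N_QUBITS)      # stand-in features
y = torch.randint(0, N_LAYOUTS, (1024,))       # stand-in labels

model = nn.Sequential(
    nn.Linear(N_QUBITS * N_QUBITS, 128), nn.ReLU(),
    nn.Linear(128, N_LAYOUTS),                 # one score per candidate layout
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()

# At inference, an argmax over layout scores replaces an expensive search,
# which is the claimed source of the speed-up over heuristic mappers.
```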


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Rama K. Vasudevan ◽  
Maxim Ziatdinov ◽  
Lukas Vlcek ◽  
Sergei V. Kalinin

Deep neural networks ('deep learning') have emerged as a technology of choice to tackle problems in speech recognition, computer vision, finance, and other fields. However, the adoption of deep learning in physical domains brings substantial challenges stemming from the correlative nature of deep learning methods, as opposed to the causal, hypothesis-driven nature of modern science. We argue that the broad adoption of Bayesian methods incorporating prior knowledge, the development of solutions with incorporated physical constraints, parsimonious structural descriptors, and generative models, and ultimately the adoption of causal models, offers a path forward for fundamental and applied research.
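As one concrete and entirely illustrative reading of "solutions with incorporated physical constraints", the sketch below augments a standard data loss with a penalty enforcing a known conservation law; the toy constraint (two predicted components summing to a conserved total) and the `loss_fn` weighting are assumptions, not a method from the paper.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 2))

def loss_fn(x, target, conserved_total=1.0, weight=0.1):
    pred = model(x)
    data_loss = nn.functional.mse_loss(pred, target)
    # Physics penalty: deviation of the predicted components from the
    # conservation constraint, evaluated independently of the labels.
    physics_loss = ((pred.sum(dim=1) - conserved_total) ** 2).mean()
    return data_loss + weight * physics_loss

x = torch.rand(256, 3)
target = torch.softmax(torch.rand(256, 2), dim=1)  # toy targets obeying the constraint
loss = loss_fn(x, target)
loss.backward()
```

The penalty steers the network toward the physically admissible subspace even where labelled data is sparse, which is one way prior knowledge can substitute for purely correlative learning.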


Author(s):  
James M. Shine ◽  
Mike Li ◽  
Oluwasanmi Koyejo ◽  
Ben Fulcher ◽  
Joseph T. Lizier

The algorithmic rules that define deep neural networks are clearly specified; however, the principles that govern their performance remain poorly understood. Here, we use systems neuroscience and information-theoretic approaches to analyse a feedforward neural network as it is trained to classify handwritten digits. By tracking the topology of the network as it learns, we identify three distinct phases of topological reconfiguration. Each phase brings the connections of the neural network into alignment with patterns of information contained in the input dataset, as well as in the preceding layers. Performing dimensionality reduction on the data reveals a process of low-dimensional category separation as a function of learning. Our results enable a systems-level understanding of how deep neural networks function, and provide evidence of how neural networks reorganize edge weights and activity patterns so as to most effectively exploit the information-theoretic content of the input data during edge-weight training.

Summary
Trained neural networks are capable of remarkable performance on complex categorization tasks; however, the precise rules according to which the network reconfigures during training remain poorly understood. We used a combination of systems neuroscience and information-theoretic analyses to interrogate the network topology of a simple feed-forward network as it was trained on a digit-classification task. Over the course of training, the hidden layers of the network reconfigured in characteristic ways that were reminiscent of key results in network-neuroscience studies of human brain imaging. In addition, we observed a strong correspondence between the topological changes at different learning phases and information-theoretic signatures of the data entered into the network. In this way, we show how neural networks learn.
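The analysis pipeline can be sketched simply: train a small feed-forward classifier and, at checkpoints, snapshot both the edge weights (the network topology) and a low-dimensional projection of hidden-layer activity. The paper's actual measures are richer information-theoretic and network-neuroscience quantities; the PCA-via-SVD projection and the random stand-in data below are simplifying assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
X = torch.rand(512, 784)                 # stand-in for MNIST digit images
y = torch.randint(0, 10, (512,))

snapshots = []
for epoch in range(20):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()

    with torch.no_grad():
        W = model[0].weight.detach().numpy()   # input->hidden edge weights
        H = model[:2](X).detach().numpy()      # hidden-layer activations
        # 2-D PCA of hidden activity: category separation should emerge
        # in this projection as a function of learning.
        Hc = H - H.mean(axis=0)
        _, _, Vt = np.linalg.svd(Hc, full_matrices=False)
        snapshots.append((epoch, W.copy(), Hc @ Vt[:2].T))
```

Comparing the weight snapshots across epochs is what exposes the distinct phases of topological reconfiguration; the activation projections track the accompanying category separation.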


2020 ◽  
Vol 34 (03) ◽  
pp. 2501-2508 ◽  
Author(s):  
Woo-Jeoung Nam ◽  
Shir Gur ◽  
Jaesik Choi ◽  
Lior Wolf ◽  
Seong-Whan Lee

As Deep Neural Networks (DNNs) have demonstrated superhuman performance in a variety of fields, there is increasing interest in understanding their complex internal mechanisms. In this paper, we propose Relative Attributing Propagation (RAP), which decomposes the output predictions of DNNs from a new perspective, separating the relevant (positive) and irrelevant (negative) attributions according to the relative influence between layers. The relevance of each neuron is identified with respect to its degree of contribution, separated into positive and negative, while preserving the conservation rule. By considering the relevance assigned to neurons in terms of relative priority, RAP allows each neuron to be assigned a bipolar importance score with respect to the output: from highly relevant to highly irrelevant. Our method therefore makes it possible to interpret DNNs with much clearer and more attentive visualizations of the separated attributions than conventional explanation methods. To verify that the attributions propagated by RAP correctly account for each meaning, we use three evaluation metrics: (i) outside-inside relevance ratio, (ii) segmentation mIoU, and (iii) region perturbation. Across all experiments and metrics, we show a sizable gap over the existing literature.
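A minimal sketch of the core idea, propagating relevance backwards through one linear layer while splitting positive and negative edge contributions and preserving the conservation rule, is given below. It uses a generic LRP-style alpha-beta rule rather than RAP's actual relative-influence weighting, so treat it as an illustration of this family of attribution methods, not the paper's algorithm.

```python
import numpy as np

def propagate_relevance(a, W, R_out, alpha=2.0, beta=1.0, eps=1e-9):
    """LRP-style alpha-beta rule: alpha - beta = 1 conserves total relevance."""
    z = W * a                                # per-edge contributions, (out, in)
    z_pos = np.clip(z, 0.0, None)            # relevant (positive) part
    z_neg = np.clip(z, None, 0.0)            # irrelevant (negative) part
    s_pos = z_pos.sum(axis=1, keepdims=True) + eps
    s_neg = z_neg.sum(axis=1, keepdims=True) - eps
    # Redistribute each output neuron's relevance over its incoming edges,
    # handling the positive and negative shares separately.
    shares = alpha * z_pos / s_pos - beta * z_neg / s_neg
    return (shares * R_out[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
a = rng.random(4)                            # input activations
W = rng.standard_normal((3, 4))              # layer weights, shape (out, in)
R_out = np.abs(W @ a)                        # stand-in relevance at the output
R_in = propagate_relevance(a, W, R_out)
print(R_in.sum(), R_out.sum())               # nearly equal: conservation holds
```

Applying such a rule layer by layer, from the output back to the input, yields the bipolar per-pixel attribution maps the abstract describes, with positive and negative relevance kept distinct throughout.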


Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann

2018 ◽  
Author(s):  
Chi Zhang ◽  
Xiaohan Duan ◽  
Ruyuan Zhang ◽  
Li Tong
