Speeding up quantum perceptron via shortcuts to adiabaticity

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yue Ban ◽  
Xi Chen ◽  
E. Torrontegui ◽  
E. Solano ◽  
J. Casanova

The quantum perceptron is a fundamental building block for quantum machine learning, a multidisciplinary field that brings abilities of quantum computing, such as state superposition and entanglement, to classical machine learning schemes. Motivated by the techniques of shortcuts to adiabaticity, we propose a sped-up quantum perceptron in which a control field on the perceptron is inversely engineered, leading to a rapid nonlinear response with a sigmoid activation function. This results in faster overall perceptron performance compared to quasi-adiabatic protocols, as well as enhanced robustness against imperfections in the controls.
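
A minimal classical sketch of the sigmoid-activated perceptron response described above; in the quantum version, this nonlinearity is realized by the qubit dynamics under the inversely engineered control field. All names and values here are illustrative.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: the nonlinear response the control field
    # is inversely engineered to produce in the quantum perceptron
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    # Classical perceptron: weighted input passed through the sigmoid
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, -1.2])   # example inputs
w = np.array([1.0, 0.8])    # example weights
print(perceptron(x, w, b=0.1))  # sigmoid(0.5*1.0 - 1.2*0.8 + 0.1) ~= 0.41
```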

2021 ◽  
Vol 4 ◽  
Author(s):  
Przemysław Juda ◽  
Philippe Renard

In hydrogeology, inverse techniques have become indispensable for characterizing subsurface parameters and their uncertainty. When modeling heterogeneous, geologically realistic discrete model spaces, such as categorical fields, Monte Carlo methods are needed to properly sample the solution space. Inversion algorithms rely on a forward operator, such as a numerical groundwater solver, and this operator is often the bottleneck responsible for the high computational cost of Monte Carlo sampling schemes. Even efficient sampling methods (for example, Posterior Population Expansion, PoPEx) need significant computing resources, so it is desirable to speed them up. Since only a few models generated by the sampler have a significant likelihood, we propose to predict the significance of generated models by means of machine learning: only models labeled as significant are passed to the forward solver, while the rest are rejected. This work compares the performance of AdaBoost, Random Forest, and a convolutional neural network as classifiers integrated with the PoPEx framework. During the initial iterations of the algorithm, the forward solver is always executed, and the subsurface models are stored along with their likelihoods; the machine learning schemes are then trained on the available data. We demonstrate the technique on a simulation of a tracer test in a fluvial aquifer. The geology is modeled by a multiple-point statistical approach; the field contains four geological facies with associated permeability, porosity, and specific storage values, and MODFLOW is used for groundwater flow and transport simulation. The solution of the inverse problem is used to estimate the 10-day protection zone around the pumping well. The estimated speed-ups with Random Forest and AdaBoost were higher than with the convolutional neural network. To validate the approach, we measured inversion computing times with and without the machine learning schemes and calculated the error against a reference solution: for the same mean error, the accelerated PoPEx achieved a speed-up of up to 2 with respect to the standard PoPEx.
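
A hypothetical sketch of the classifier-accelerated sampling loop described above, assuming a warm-up phase in which every proposed model is solved and a Random Forest that then screens new proposals. The function names, feature encoding, significance threshold, and single training pass are all illustrative assumptions, not the PoPEx implementation (which, for instance, would retrain as more data arrive).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def accelerated_sampler(propose_model, forward_solver, likelihood,
                        n_iter=1000, n_warmup=100, threshold=1e-6):
    X, y, chain = [], [], []
    clf = None
    for i in range(n_iter):
        model = propose_model()                 # e.g. a categorical facies field
        features = model.ravel()                # flatten field for the classifier
        if clf is not None and clf.predict([features])[0] == 0:
            continue                            # predicted insignificant: skip the solver
        L = likelihood(forward_solver(model))   # expensive groundwater simulation
        X.append(features)
        y.append(int(L > threshold))            # label: significant or not
        chain.append((model, L))
        if i == n_warmup:                       # train once warm-up data are available
            clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    return chain
```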


2019 ◽  
Vol 2019 ◽  
pp. 1-8 ◽  
Author(s):  
Peng Wang ◽  
Xiaomin Zhang ◽  
Yan Hao

In the traditional convolutional neural network (CNN), the large number of sigmoid activation function derivative evaluations makes feature extraction from Synthetic Aperture Radar (SAR) images inefficient. Here, the sigmoid activation function in the CNN is replaced with a rectified linear unit (ReLU) activation function, and the classifier is replaced by an Extreme Learning Machine (ELM). In the resulting model, the improved CNN serves as the feature extractor and the ELM as the recognizer, yielding a SAR image recognition algorithm based on the combined CNN-ELM scheme. The experiment is conducted on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which contains 10 kinds of target images. The results show that the algorithm achieves network sparsity, alleviates the overfitting problem, and speeds up convergence. Notably, the running time of the experiment is very short, and compared with other experiments on the same database, it yields a higher recognition rate: the accuracy of SAR image recognition is 100%.
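
A minimal sketch of the CNN-ELM split described above: ReLU features feed an Extreme Learning Machine whose hidden weights are random and fixed, so only the output weights need to be computed, in closed form by least squares. The shapes and the random stand-in features are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def relu(z):
    # ReLU replaces the sigmoid: cheap to evaluate and sparsity-inducing
    return np.maximum(0.0, z)

class ELM:
    def __init__(self, n_features, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_features, n_hidden))  # fixed random hidden weights
        self.beta = None

    def fit(self, X, y_onehot):
        H = relu(X @ self.W)                      # random hidden representation
        self.beta = np.linalg.pinv(H) @ y_onehot  # closed-form least-squares output weights
        return self

    def predict(self, X):
        return np.argmax(relu(X @ self.W) @ self.beta, axis=1)

# Features would come from the ReLU CNN extractor; random stand-ins here
X_train = np.random.randn(100, 64)
y = np.random.randint(0, 10, 100)   # 10 target classes, as in MSTAR
elm = ELM(n_features=64, n_hidden=256).fit(X_train, np.eye(10)[y])
print(elm.predict(X_train[:5]))
```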


2020 ◽  
Vol 2020 (10) ◽  
pp. 54-62
Author(s):  
Oleksii VASYLIEV

The problem of applying neural networks to calculate the ratings used in banking when deciding whether to grant loans to borrowers is considered. The task is to determine the rating function of a borrower based on a set of statistical data on the effectiveness of loans provided by the bank. When constructing a regression model to calculate the rating function, its general form must be known in advance; the task then reduces to calculating the parameters that enter the expression for the rating function. In contrast, when using neural networks, there is no need to specify the general form of the rating function. Instead, a particular neural network architecture is chosen and its parameters are calculated from the statistical data. Importantly, the same neural network architecture can be used to process different sets of statistical data. The disadvantages of using neural networks include the need to calculate a large number of parameters, and there is no universal algorithm for determining the optimal neural network architecture. As an example of the use of neural networks to determine a borrower's rating, a model system is considered in which the borrower's rating is given by a known non-analytical rating function. A neural network with two hidden layers, containing three and two neurons respectively, each with a sigmoid activation function, is used for the modeling. It is shown that the neural network restores the borrower's rating function with quite acceptable accuracy.
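
A minimal sketch of the described model system, assuming scikit-learn's MLPRegressor as the two-hidden-layer sigmoid network; the stand-in rating function and data below are illustrative, not those of the article.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))   # borrower attributes (illustrative)
# A known but non-analytical stand-in rating function to be restored
rating = np.where(X[:, 0] + X[:, 1] ** 2 > 0.8, 1.0, 0.0)

# Two hidden layers of 3 and 2 neurons with sigmoid ("logistic") activation
net = MLPRegressor(hidden_layer_sizes=(3, 2), activation="logistic",
                   max_iter=5000, random_state=0)
net.fit(X, rating)
print("training MSE:", np.mean((net.predict(X) - rating) ** 2))
```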


2019 ◽  
Vol 20 (5) ◽  
pp. 540-550 ◽  
Author(s):  
Jiu-Xin Tan ◽  
Hao Lv ◽  
Fang Wang ◽  
Fu-Ying Dao ◽  
Wei Chen ◽  
...  

Enzymes are proteins that act as biological catalysts to speed up cellular biochemical processes. According to their main Enzyme Commission (EC) numbers, enzymes are divided into six categories: EC-1, oxidoreductases; EC-2, transferases; EC-3, hydrolases; EC-4, lyases; EC-5, isomerases; and EC-6, synthetases. Different enzymes have different biological functions and act on different targets, so knowing which family an enzyme belongs to can help infer its catalytic mechanism and provide information about its biological function. With the large volume of protein sequences flowing into databanks in the post-genomics age, annotating the family of an enzyme is very important. Since experimental methods are not cost-effective, bioinformatics tools are a great help for accurately classifying the families of enzymes. In this review, we summarize the application of machine learning methods to the prediction of enzyme families from different aspects. We hope that this review will provide insights and inspiration for research on enzyme family classification.
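
As a concrete illustration of the kind of pipeline such predictors use (not any specific method from the review), a protein sequence can be reduced to its amino-acid composition, a 20-dimensional feature vector, and fed to a standard classifier that assigns one of the six EC classes. The toy sequences and labels below are placeholders.

```python
from collections import Counter
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    # Fraction of each of the 20 standard amino acids in the sequence
    counts = Counter(seq)
    return np.array([counts[a] / len(seq) for a in AMINO_ACIDS])

# Placeholder training data: (sequence, EC class in 1..6)
train = [("MKTLLVAG", 3), ("GAVLIPFM", 1), ("STCYNQDE", 2)]
X = np.array([composition(s) for s, _ in train])
y = np.array([c for _, c in train])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([composition("MKLVAGTD")]))  # predicted EC class
```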


2021 ◽  
Vol 103 (4) ◽  
Author(s):  
Haoran Liao ◽  
Ian Convy ◽  
William J. Huggins ◽  
K. Birgitta Whaley

2021 ◽  
Vol 11 (15) ◽  
pp. 6704
Author(s):  
Jingyong Cai ◽  
Masashi Takemoto ◽  
Yuming Qiu ◽  
Hironori Nakajo

Despite being heavily used in the training of deep neural networks (DNNs), hardware multipliers are resource-intensive and in short supply in many scenarios. Previous work has shown the advantage of computing activation functions, such as the sigmoid, with shift-and-add operations, although these approaches fail to remove multiplications from training altogether. In this paper, we propose an approach that converts all multiplications in the forward and backward passes of DNNs into shift-and-add operations. Because the model parameters and backpropagated errors of a large DNN model are typically clustered around zero, these values can be approximated by their sine values. Multiplications between weights and error signals are thereby transferred to multiplications of their sine values, which can be replaced with simpler operations via the product-to-sum formula. In addition, a rectified sine activation function is used to convert layer inputs into sine values as well. In this way, the original multiplication-intensive operations can be computed through simple shift-and-add operations. This trigonometric approximation method provides an efficient training and inference alternative for devices lacking sufficient hardware multipliers. Experimental results demonstrate that the method obtains performance close to that of classical training algorithms. The proposed approach sheds new light on future hardware customization research for machine learning.
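
A numeric sketch of the trigonometric approximation: for weights w and errors e clustered near zero, w·e ≈ sin(w)·sin(e), and the product-to-sum identity sin(w)sin(e) = [cos(w − e) − cos(w + e)]/2 replaces the multiplication with additions and cosine evaluations, which hardware can realize with shift-and-add schemes; math.cos stands in for such a unit here.

```python
import math

def approx_mul(w, e):
    # Product-to-sum identity: (cos(w - e) - cos(w + e)) / 2 == sin(w) * sin(e),
    # which approximates w * e when both values are close to zero
    return 0.5 * (math.cos(w - e) - math.cos(w + e))

for w, e in [(0.05, -0.02), (0.1, 0.08), (0.3, -0.25)]:
    exact = w * e
    approx = approx_mul(w, e)
    print(f"w*e={exact:+.6f}  sin-approx={approx:+.6f}  abs err={abs(exact - approx):.2e}")
```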


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 460
Author(s):  
Samuel Yen-Chi Chen ◽  
Shinjae Yoo

Distributed training across several quantum computers could significantly improve training time, and sharing the learned model rather than the data could improve data privacy, since training would happen where the data are located. One potential scheme to achieve this is federated learning (FL), in which several clients or local nodes learn on their own data and a central node aggregates the models collected from those local nodes. However, to the best of our knowledge, no work has yet been done on quantum machine learning (QML) in a federated setting. In this work, we present federated training of hybrid quantum-classical machine learning models, although our framework could be generalized to pure quantum machine learning models. Specifically, we consider a quantum neural network (QNN) coupled with a classical pre-trained convolutional model. Our distributed federated learning scheme reaches almost the same trained-model accuracy while making distributed training significantly faster, demonstrating a promising research direction for scaling and privacy.
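
A minimal sketch of one federated-averaging round of the kind such a scheme builds on, assuming simple parameter averaging at the central node. The local update is a placeholder for a client's training step; for a QNN, the trainable rotation angles of the variational circuit would play the role of the parameter vector. All names are illustrative.

```python
import numpy as np

def local_update(params, data, lr=0.1):
    # Placeholder for a client's local training step on its own data;
    # a real client would compute gradients of its loss (e.g. of a QNN)
    grad = np.random.default_rng(hash(data) % 2**32).standard_normal(params.shape)
    return params - lr * grad

def federated_round(global_params, client_datasets):
    # Each client trains locally; only parameters travel to the central node
    client_params = [local_update(global_params.copy(), d) for d in client_datasets]
    return np.mean(client_params, axis=0)  # central node aggregates by averaging

params = np.zeros(4)   # e.g. rotation angles of a variational circuit
for _ in range(3):     # three communication rounds
    params = federated_round(params, ["clientA", "clientB", "clientC"])
print(params)
```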


Author(s):  
G. Arunakranthi ◽  
B. Rajkumar ◽  
V. Chandra Shekhar Rao ◽  
A. Harshavardhan

2021 ◽  
Author(s):  
José D. Martín-Guerrero ◽  
Lucas Lamata

Photonics ◽  
2021 ◽  
Vol 8 (2) ◽  
pp. 33
Author(s):  
Lucas Lamata

Quantum machine learning has emerged as a promising paradigm that could accelerate machine learning calculations. Within this field, quantum reinforcement learning aims at designing and building quantum agents that exchange information with their environment and adapt to it in order to achieve some goal. Different quantum platforms have been considered for quantum machine learning and, specifically, for quantum reinforcement learning. Here, we review the field of quantum reinforcement learning and its implementation with quantum photonics. This quantum technology may enhance quantum computation and communication, as well as machine learning, via the fruitful marriage of these previously unrelated fields.

