DRPnet - Automated Particle Picking in Cryo-Electron Micrographs using Deep Regression

2019
Author(s):
Nguyen P. Nguyen
Jacob Gotberg
Ilker Ersoy
Filiz Bunyak
Tommi White

Abstract
Selection of individual protein particles in cryo-electron micrographs is an important step in single particle analysis. In this study, we developed a deep learning-based method to automatically detect particle centers from cryoEM micrographs. This is a challenging task because of the low signal-to-noise ratio of cryoEM micrographs and the size, shape, and grayscale-level variations in particles. We propose a double convolutional neural network (CNN) cascade for automated detection of particles in cryo-electron micrographs. Particles are detected by the first network, a fully convolutional regression network (FCRN), which maps the particle image to a continuous distance map that acts like a probability density function of particle centers. Particles identified by FCRN are further refined (or classified) by the second CNN to reduce false particle detections. This approach, entitled Deep Regression Picker Network or "DRPnet", is simple but very effective in recognizing different grayscale patterns corresponding to 2D views of 3D particles. Our experiments showed that DRPnet's first CNN, pretrained with one dataset, can be used to detect particles from different datasets without retraining. The performance of this network can be further improved by retraining it on specific particle datasets. The second network, a classification CNN, refines detection results by identifying false detections. The proposed fully automated "deep regression" system, DRPnet, pretrained with TRPV1 (EMPIAR-10005) [1] and tested on β-galactosidase (EMPIAR-10017) [2] and β-galactosidase (EMPIAR-10061) [3], was then compared to RELION's interactive particle picking. Preliminary experiments yielded comparable or better particle picking performance with drastically reduced user interaction and improved processing time.
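To make the regression stage concrete, below is a minimal PyTorch sketch (not the authors' code; the layer sizes, pooling window, detection threshold, and 256×256 input are illustrative assumptions) of a small fully convolutional regression network that outputs a distance map, followed by a local-maximum search that treats peaks as candidate particle centers.

```python
# Minimal sketch of an FCRN-style particle picker; architecture details
# are assumptions for illustration, not DRPnet's published network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCRN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1),  # single-channel distance map
        )

    def forward(self, x):
        return self.net(x)

def pick_centers(dist_map, threshold=0.5, window=15):
    """Local maxima of the regressed map become candidate particle centers."""
    maxima = F.max_pool2d(dist_map, window, stride=1, padding=window // 2)
    peaks = (dist_map == maxima) & (dist_map > threshold)
    return peaks.nonzero()  # rows of (batch, channel, y, x) coordinates

micrograph = torch.randn(1, 1, 256, 256)   # stand-in for a cryoEM micrograph
with torch.no_grad():
    centers = pick_centers(FCRN()(micrograph))
```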

2021
Vol 22 (1)
Author(s):
Nguyen Phuoc Nguyen
Ilker Ersoy
Jacob Gotberg
Filiz Bunyak
Tommi A. White

Abstract

Background: Identification and selection of protein particles in cryo-electron micrographs is an important step in single particle analysis. In this study, we developed a deep learning-based particle picking network to automatically detect particle centers from cryoEM micrographs. This is a challenging task due to the nature of cryoEM data: low signal-to-noise ratios with variable particle sizes, shapes, distributions, and grayscale variations, as well as other undesirable artifacts.

Results: We propose a double convolutional neural network (CNN) cascade for automated detection of particles in cryo-electron micrographs. This approach, entitled Deep Regression Picker Network or "DRPnet", is simple but very effective in recognizing different particle sizes, shapes, distributions, and grayscale patterns corresponding to 2D views of 3D particles. Particles are detected by the first network, a fully convolutional regression network (FCRN), which maps the particle image to a continuous distance map that acts like a probability density function of particle centers. Particles identified by FCRN are further refined by the second, classification CNN to reduce false detections. DRPnet's first CNN, pretrained with only a single cryoEM dataset, can be used to detect particles from different datasets without retraining. Compared to RELION template-based autopicking, DRPnet achieves better particle picking performance with drastically reduced user interaction and processing time. DRPnet also outperforms state-of-the-art particle picking networks in terms of the supervised detection evaluation metrics recall, precision, and F-measure. To further highlight the quality of the picked particle sets, we compute and present additional performance metrics assessing the resulting 3D reconstructions, such as the number of 2D class averages, efficiency/angular coverage, Rosenthal-Henderson plots, and local/global 3D reconstruction resolution.

Conclusion: DRPnet shows greatly improved time-savings when generating an initial particle dataset, compared to manual picking followed by template-based autopicking. Compared to other networks, DRPnet has equivalent or better performance, and it excels on cryoEM datasets that have low contrast or clumped particles. By the additional performance metrics, DRPnet is useful for higher-resolution 3D reconstructions with decreased particle numbers or unknown symmetry, detecting particles with better angular orientation coverage.
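For reference, here is a minimal sketch of the supervised detection metrics named above (recall, precision, F-measure), assuming picked and ground-truth particle centers are given as (x, y) coordinate lists; the 20-pixel matching tolerance is an assumption, not a value from the paper.

```python
# Greedy center matching within a distance tolerance, then precision,
# recall, and F-measure; the tolerance is an illustrative assumption.
import numpy as np

def detection_metrics(picked, truth, tol=20.0):
    picked = np.asarray(picked, dtype=float)
    truth = np.asarray(truth, dtype=float)
    matched = np.zeros(len(truth), dtype=bool)
    tp = 0
    for p in picked:
        d = np.linalg.norm(truth - p, axis=1)
        d[matched] = np.inf              # each true particle matches once
        if d.size and d.min() <= tol:
            matched[d.argmin()] = True
            tp += 1
    precision = tp / max(len(picked), 1)
    recall = tp / max(len(truth), 1)
    f_measure = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f_measure

print(detection_metrics(picked=[(10, 10), (100, 100)],
                        truth=[(12, 9), (300, 300)]))   # (0.5, 0.5, 0.5)
```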


2021
Vol 11 (11)
pp. 5235
Author(s):  
Nikita Andriyanov

This article studies convolutional neural network inference for image processing under visual attacks. Four types of attacks were considered: simple attacks, the addition of white Gaussian noise, an impulse applied to a single pixel of an image, and attacks that change brightness values within a rectangular area. The MNIST and Kaggle dogs vs. cats datasets were chosen. Recognition accuracy was characterized as a function of the number of images subjected to attacks and of the attack types used in training. The study was based on well-known convolutional neural network architectures used in pattern recognition tasks, such as VGG-16 and Inception_v3. Dependencies of recognition accuracy on the parameters of the visual attacks were obtained. Original methods were proposed to counter visual attacks. These methods are based on identifying classes that are "incomprehensible" to the recognizer and subsequently correcting them using neural network inference at reduced image sizes. Applying these methods yielded a 1.3-fold gain in the accuracy metric after an iteration that discards incomprehensible images, and a 4–5% reduction in uncertainty after an iteration that integrates the results of image analyses at reduced dimensions.
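To illustrate, the sketch below generates the perturbation families described above (additive white Gaussian noise, a single-pixel impulse, and a brightness shift inside a rectangular area); the noise level, pixel value, shift, and rectangle size are assumptions chosen for demonstration, not the paper's settings.

```python
# Illustrative visual attacks on a grayscale image in [0, 1]; all
# parameter values are demonstration assumptions.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(img, sigma=0.1):
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def one_pixel(img, value=1.0):
    out = img.copy()
    y, x = rng.integers(img.shape[0]), rng.integers(img.shape[1])
    out[y, x] = value                     # impulse on a single pixel
    return out

def rect_brightness(img, shift=0.3, size=8):
    out = img.copy()
    y = rng.integers(img.shape[0] - size)
    x = rng.integers(img.shape[1] - size)
    out[y:y + size, x:x + size] = np.clip(
        out[y:y + size, x:x + size] + shift, 0.0, 1.0)
    return out

img = rng.random((28, 28))                # MNIST-sized stand-in image
attacked = rect_brightness(one_pixel(gaussian_noise(img)))
```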


2018
Vol 14 (10)
pp. 155014771880594
Author(s):
Xu Kang
Bin Song
Jie Guo
Xiaojiang Du
Mohsen Guizani

The vehicle tracking task plays an important role in the Internet of Vehicles and intelligent transportation systems. Beyond the traditional Global Positioning System sensor, an image sensor can capture different kinds of vehicles, analyze their driving situation, and interact with them. Aiming at the problem that traditional convolutional neural networks are vulnerable to background interference, this article proposes a vehicle tracking method based on a human attention mechanism that self-selects deep features through an inter-channel fully connected layer. It mainly includes the following contents: (1) a fully convolutional neural network fused with an attention mechanism that selects the deep features used for convolution; (2) a separation method for the template and semantic background regions that adaptively separates target vehicles from the background in the initial frame; (3) a two-stage method for model training using our traffic dataset. The experimental results show that the proposed method improves tracking accuracy without increasing tracking time. Meanwhile, it strengthens the robustness of the algorithm under complex background conditions. The success rate of the proposed method on the overall traffic datasets is about 10% higher than that of the Siamese network, and the overall precision is 8% higher.
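The inter-channel fully connected layer described above resembles squeeze-and-excitation-style channel attention; the PyTorch sketch below shows that general pattern (the channel count and reduction factor are illustrative assumptions, not the paper's architecture).

```python
# Channel re-weighting via an inter-channel fully connected layer
# (squeeze-and-excitation style); sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))    # global average pool per channel
        return x * w[:, :, None, None]     # re-weight feature channels

features = torch.randn(1, 64, 32, 32)      # deep features from a backbone
attended = ChannelAttention(64)(features)
```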


2019
Vol 81 (5)
pp. 3283-3291
Author(s):
Naeim Bahrami
Tara Retson
Kevin Blansit
Kang Wang
Albert Hsiao

2019
Vol 146 (4)
pp. 2961-2962
Author(s):
Kira Howarth
David F. Van Komen
Tracianne B. Neilsen
David P. Knobles
Peter H. Dahl
...  

Author(s):  
Alfita Rakhmandasari
Wayan Firdaus Mahmudy
Titiek Yulianti

Kenaf is a fibre plant whose stem bark is used as raw material for making geo-textiles, particleboard, pulp, fibre drains, fibreboard, and paper. Attacks by plant pests and diseases cause crop production to decrease, and their detection can be a challenging task for farmers. Detection can be done using artificial intelligence-based methods. Convolutional neural networks (CNNs) are one of the most popular neural network architectures and have been successfully applied to image classification. However, the CNN method is still considered slow, so it was extended into the faster region-based convolutional neural network (Faster RCNN). As the selection of input features largely determines the accuracy of the results, a pre-processing procedure is developed to transform the kenaf plant image into input features for the Faster RCNN. A computational experiment shows that the Faster RCNN has a very short computation time, completing 10,000 iterations in 3 hours, whereas a convolutional neural network (CNN) completes only 100 iterations in the same time. Furthermore, the Faster RCNN achieves 77.50% detection accuracy and 96.74% bounding-box accuracy, while the CNN reaches 72.96% detection accuracy at 400 epochs. The results also show that the selection of input features and the associated pre-processing procedure can produce high detection accuracy.
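As a rough illustration of the Faster RCNN setup (not the authors' implementation), the sketch below adapts torchvision's off-the-shelf Faster R-CNN to a small detection task and runs one inference pass; the class count and image size are assumptions.

```python
# Adapting torchvision's Faster R-CNN to a custom detection task;
# the three-class setup (background + two assumed classes) is illustrative.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 3  # background + assumed pest/disease classes
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    # One 3-channel image in [0, 1]: a stand-in for a preprocessed kenaf photo.
    predictions = model([torch.rand(3, 480, 640)])
print(predictions[0]["boxes"].shape, predictions[0]["scores"].shape)
```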


2020
Vol 9 (4)
pp. 1
Author(s):  
Arman I. Mohammed
Ahmed AK. Tahir

A new optimization algorithm called Adam Merged with AMSgrad (AMAMSgrad) is modified and used for training a convolutional neural network of the Wide Residual Network type, Wide ResNet (WRN), for image classification. The modification includes the use of the second moment as in AMSgrad together with the Adam updating rule, but with an adjusted power of the denominator. The main aim is to improve the performance of the AMAMSgrad optimizer by a proper selection of the power of the denominator. The implementation of AMAMSgrad and the two known methods (Adam and AMSgrad) on the Wide ResNet using the CIFAR-10 dataset for image classification reveals that WRN performs better with the AMAMSgrad optimizer than with the Adam and AMSgrad optimizers. The training, validation, and testing accuracies are improved with AMAMSgrad over Adam and AMSgrad, and AMAMSgrad needs fewer epochs to reach maximum performance. With AMAMSgrad, the training accuracies are (90.45%, 97.79%, 99.98%, 99.99%) at epochs (60, 120, 160, 200) respectively, while the validation accuracies for the same epoch numbers are (84.89%, 91.53%, 95.05%, 95.23%). For testing, the WRN with AMAMSgrad provided an overall accuracy of 94.8%. All these accuracies exceed those provided by WRN with Adam and AMSgrad. The classification metric measures indicate that the given WRN architecture performs significantly well with all three optimizers and with high confidence, especially with the AMAMSgrad optimizer.
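Since AMAMSgrad builds on AMSgrad's running maximum of the second moment and adjusts the exponent of the denominator, the NumPy sketch below shows that baseline update with the exponent exposed as a parameter p (p = 0.5 recovers standard AMSgrad; bias correction is omitted for brevity, and the hyperparameters are common Adam defaults, not values from the paper).

```python
# AMSgrad-style update with the denominator exponent p exposed;
# p = 0.5 is standard AMSgrad, and AMAMSgrad tunes this choice.
import numpy as np

def amsgrad_step(theta, grad, state, lr=0.05, b1=0.9, b2=0.999, p=0.5, eps=1e-8):
    m, v, vmax = state
    m = b1 * m + (1 - b1) * grad           # first moment (bias correction omitted)
    v = b2 * v + (1 - b2) * grad ** 2      # second moment
    vmax = np.maximum(vmax, v)             # AMSgrad: non-decreasing denominator
    theta = theta - lr * m / (vmax ** p + eps)
    return theta, (m, v, vmax)

theta = np.array([1.0, -2.0])
state = (np.zeros(2), np.zeros(2), np.zeros(2))
for _ in range(100):
    grad = 2 * theta                       # gradient of f(theta) = ||theta||^2
    theta, state = amsgrad_step(theta, grad, state)
print(theta)                               # moves toward the minimum at 0
```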


Author(s):  
Asma Abdulelah Abdulrahman
Fouad Shaker Tahir

In this work, it is proposed to compress the color image after de-noising, using a coding based on a new discrete wavelet transform, the discrete Chebyshev wavelet transform (DCHWT), linked to a convolutional neural network that compresses the color image. The aim of this work is an effective pipeline for face recognition: de-noising and compressing the image with convolutional neural networks to remove the noise introduced while the image was transmitted over the communication network. The algorithm was evaluated by computing the peak signal-to-noise ratio (PSNR), mean square error (MSE), compression ratio (CR), and bits per pixel (BPP) of the compressed output for a 256×256 color input image, demonstrating the quality and efficiency of the proposed algorithm. Using a convolutional neural network with the new wavelets yields a better CR together with a high PSNR, producing a high-quality compressed image ready for face recognition.
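For reference, a minimal sketch of the four reported quality metrics, assuming 8-bit images stored as NumPy arrays and a known compressed size in bytes; the 20,000-byte compressed size below is a placeholder, not a result from the paper.

```python
# PSNR, MSE, CR, and BPP for an 8-bit image; the compressed byte count
# is a placeholder assumption for demonstration.
import numpy as np

def mse(original, reconstructed):
    return np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)

def psnr(original, reconstructed, peak=255.0):
    e = mse(original, reconstructed)
    return float("inf") if e == 0 else 10 * np.log10(peak ** 2 / e)

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes, height, width):
    return 8 * compressed_bytes / (height * width)

img = np.random.default_rng(0).integers(0, 256, (256, 256, 3), dtype=np.uint8)
noisy = np.clip(img.astype(int) + 5, 0, 255).astype(np.uint8)
print(psnr(img, noisy),
      compression_ratio(img.nbytes, 20_000),
      bits_per_pixel(20_000, 256, 256))
```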

