Deep Convolutional Neural Networks for Tea Tree Pest Recognition and Diagnosis

Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2140
Author(s):  
Jing Chen ◽  
Qi Liu ◽  
Lingwang Gao

Due to the benefits of convolutional neural networks (CNNs) in image classification, they have been extensively used in the automated classification and detection of crop pests. The aim of the current study is to develop a deep convolutional neural network to automatically identify 14 species of tea pests that possess symmetry properties. (1) As there are not enough tea pest images available to train a deep convolutional neural network from scratch, we propose to classify tea pest images by fine-tuning the VGGNET-16 deep convolutional neural network. (2) The performance of our method is evaluated through comparison with the traditional machine learning algorithms Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP). (3) All three methods can identify tea tree pests well: the proposed convolutional neural network classifier reaches an accuracy of 97.75%, while MLP and SVM achieve accuracies of 76.07% and 68.81%, respectively. Our proposed method performs the best of the assessed recognition algorithms. The experimental results also show that fine-tuning is a very powerful and efficient tool for small datasets in practical problems.
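The fine-tuning strategy described above can be sketched in miniature: keep a pretrained "backbone" frozen and train only a small classifier head on scarce data. Everything below is a stand-in under stated assumptions: a fixed random projection plays the role of VGGNET-16's pretrained convolutional layers, the data are synthetic, and the head is fitted in closed form rather than by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cls, d_in, d_feat, n = 3, 20, 16, 300

# Synthetic "pest images": three well-separated classes in input space.
centers = rng.normal(scale=3.0, size=(n_cls, d_in))
y = rng.integers(0, n_cls, size=n)
X = centers[y] + rng.normal(size=(n, d_in))

# Frozen backbone: a fixed random projection standing in for the
# pretrained convolutional layers (never updated during training).
W_backbone = rng.normal(size=(d_in, d_feat)) / np.sqrt(d_in)
feats = np.maximum(X @ W_backbone, 0.0)  # ReLU features

# "Fine-tune" only the head: closed-form least-squares fit to one-hot
# targets, a simple stand-in for gradient training of the top layers.
W_head, *_ = np.linalg.lstsq(feats, np.eye(n_cls)[y], rcond=None)
acc = float((np.argmax(feats @ W_head, axis=1) == y).mean())
print(f"head-only training accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: the backbone's weights never change, so only a tiny parameter set must be estimated from the small dataset.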

2019 ◽  
Vol 8 (4) ◽  
pp. 160 ◽  
Author(s):  
Bingxin Liu ◽  
Ying Li ◽  
Guannan Li ◽  
Anling Liu

Spectral characteristics play an important role in the classification of oil films, but the presence of too many bands can lead to information redundancy and reduced classification accuracy. In this study, a classification model combining spectral-indices-based band selection (SIs) and one-dimensional convolutional neural networks was proposed to realize automatic oil film classification using hyperspectral remote sensing images. For comparison, the minimum Redundancy Maximum Relevance (mRMR) method was also tested for reducing the number of bands. A support vector machine (SVM), random forest (RF), and Hu's convolutional neural networks (CNN) were trained and tested. The results show that the accuracy of classification by the one-dimensional convolutional neural network (1D CNN) models surpassed that of other machine learning algorithms such as SVM and RF. The SIs+1D CNN model could produce a more accurate oil film distribution map in less time than the other models.
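The core of the SIs+1D CNN idea, treating each pixel's spectrum as a one-dimensional signal, can be sketched as follows. The band indices and the convolution filter are illustrative only, not the paper's spectral-index-derived choices or learned weights.

```python
import numpy as np

rng = np.random.default_rng(1)
spectrum = rng.random(64)  # one pixel, 64 hyperspectral bands

# Band selection: keep only a handful of informative bands (these
# indices are made up, not derived from spectral indices or mRMR).
selected = spectrum[[3, 10, 22, 41, 57]]

# 1D convolution: slide a filter along the band axis, as one layer of a
# 1D CNN does (the filter here is fixed, not learned).
kernel = np.array([0.25, 0.5, 0.25])
feature_map = np.convolve(spectrum, kernel, mode="valid")
activation = np.maximum(feature_map, 0.0)  # ReLU
print(selected.shape, activation.shape)
```

Band selection shortens the input and removes redundancy; the 1D convolution then extracts local spectral shape features regardless of which bands survive.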


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2393 ◽  
Author(s):  
Daniel Octavian Melinte ◽  
Luige Vladareanu

This paper presents the interaction between humans and a NAO robot using deep convolutional neural networks (CNNs), based on an innovative end-to-end pipeline that applies two optimized CNNs, one for face recognition (FR) and one for facial expression recognition (FER), in order to obtain real-time inference speed for the entire process. Two different models are considered for FR: one known to be very accurate but with low inference speed (faster region-based convolutional neural network), and one that is less accurate but has high inference speed (single shot detector convolutional neural network). For emotion recognition, transfer learning and fine-tuning of three CNN models (VGG, Inception V3 and ResNet) have been used. The overall results show that the single shot detector convolutional neural network (SSD CNN) and faster region-based convolutional neural network (Faster R-CNN) models for face detection share almost the same accuracy: 97.8% for Faster R-CNN on PASCAL visual object classes (PASCAL VOC) evaluation metrics and 97.42% for SSD Inception. In terms of FER, ResNet obtained the highest training accuracy (90.14%), while the visual geometry group (VGG) network had 87% accuracy and Inception V3 reached 81%. The results show improvements of over 10% when using two serialized CNNs instead of only the FER CNN, while the recent optimization model called rectified adaptive moment optimization (RAdam) led to better generalization and an accuracy improvement of 3-4% on each emotion recognition CNN.
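A hedged sketch of the serialized two-CNN pipeline: the face detector's crop feeds the expression classifier, which is why the second network sees only face pixels. `detect_face` and `classify_emotion` are toy stand-ins for SSD/Faster R-CNN and the fine-tuned FER networks; the thresholds and the synthetic image are invented for illustration.

```python
import numpy as np

def detect_face(image):
    """Stand-in for the SSD / Faster R-CNN face detector: returns a box."""
    ys, xs = np.nonzero(image > 0.5)  # bright region plays the "face"
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1

def classify_emotion(crop):
    """Stand-in for the fine-tuned FER CNN (VGG / Inception V3 / ResNet)."""
    return "happy" if crop.mean() > 0.7 else "neutral"

image = np.zeros((32, 32))
image[8:20, 10:24] = 0.9  # synthetic bright "face" region

x0, y0, x1, y1 = detect_face(image)              # stage 1: FR
emotion = classify_emotion(image[y0:y1, x0:x1])  # stage 2: FER on the crop
print((x0, y0, x1, y1), emotion)
```

Serializing the two stages is what yields the reported accuracy gain: the FER network is never distracted by background, at the cost of running two models per frame.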


2018 ◽  
Author(s):  
Rollyn Labuguen (P) ◽  
Vishal Gaurav ◽  
Salvador Negrete Blanco ◽  
Jumpei Matsumoto ◽  
Kenichi Inoue ◽  
...  

Understanding animal behavior in its natural habitat is a challenging task. One of the primary steps in analyzing animal behavior is feature detection. In this study, we propose the use of a deep convolutional neural network (CNN) to locate monkey features in raw RGB images of monkeys in their natural environment. We train the model to identify features such as the nose and shoulders of the monkey at a model loss of about 0.01.
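Feature detection of this kind is commonly framed as keypoint regression: the network predicts (x, y) coordinates and is scored by mean squared error, in the spirit of the ~0.01 model loss reported above. All coordinates below are made up (normalized image coordinates), not data from the study.

```python
import numpy as np

# Ground-truth keypoints for one monkey image (illustrative values).
truth = np.array([[0.50, 0.30],    # nose
                  [0.35, 0.55],    # left shoulder
                  [0.65, 0.55]])   # right shoulder

# Hypothetical network predictions: truth plus some localization error.
pred = truth + np.array([[0.05, -0.05],
                         [0.00, 0.10],
                         [-0.10, 0.00]])

# Mean squared error over all coordinates -- the training loss.
mse = float(np.mean((pred - truth) ** 2))
print(f"keypoint MSE: {mse:.4f}")
```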


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Emre Kiyak ◽  
Gulay Unal

Purpose The paper aims to address a tracking algorithm based on deep learning; four deep learning tracking models are developed and compared with each other to prevent collisions and to obtain target tracking in autonomous aircraft. Design/methodology/approach First, detection methods were used to follow the visual target, and then the tracking methods were examined. Here, four models were developed: deep convolutional neural networks (DCNN), deep convolutional neural networks with fine-tuning (DCNNFN), transfer learning with deep convolutional neural network (TLDCNN) and fine-tuning deep convolutional neural network with transfer learning (FNDCNNTL). Findings The training of DCNN took 9 min 33 s, with an accuracy of 84%. For DCNNFN, the training time was 4 min 26 s and the accuracy was 91%. The training of TLDCNN took 34 min 49 s with an accuracy of 95%. With FNDCNNTL, the training time was 34 min 33 s and the accuracy was nearly 100%. Originality/value Compared to results in the literature ranging from 89.4% to 95.6%, better results were obtained in the paper using FNDCNNTL.
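The reported figures can be collected for a side-by-side view of the speed/accuracy trade-off (times converted to seconds; the "nearly 100%" for FNDCNNTL is rounded to 100 here):

```python
# Training time and accuracy of the four tracking models, as reported
# in the abstract above.
results = {
    "DCNN":     {"train_s": 9 * 60 + 33,  "acc": 84},
    "DCNNFN":   {"train_s": 4 * 60 + 26,  "acc": 91},
    "TLDCNN":   {"train_s": 34 * 60 + 49, "acc": 95},
    "FNDCNNTL": {"train_s": 34 * 60 + 33, "acc": 100},  # "nearly 100%"
}

best = max(results, key=lambda m: results[m]["acc"])
print(best, results[best])
```

The comparison makes the trade-off explicit: fine-tuning alone (DCNNFN) is fastest to train, while combining fine-tuning with transfer learning (FNDCNNTL) buys the highest accuracy at roughly 8x the training time.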


Mathematics ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 936 ◽  
Author(s):  
Nebojsa Bacanin ◽  
Timea Bezdan ◽  
Eva Tuba ◽  
Ivana Strumberger ◽  
Milan Tuba

Convolutional neural networks have a broad spectrum of practical applications in computer vision. Currently, much of the available data comes from images, and it is crucial to have an efficient technique for processing these large amounts of data. Convolutional neural networks have proven to be very successful in tackling image processing tasks. However, designing a network structure for a given problem entails fine-tuning of the hyperparameters in order to achieve better accuracy. This process takes much time and requires effort and domain expertise. Designing a convolutional neural network architecture represents a typical NP-hard optimization problem, and some frameworks for generating network structures for specific image classification tasks have been proposed. To address this issue, in this paper we propose a hybridized monarch butterfly optimization algorithm. Based on the observed deficiencies of the original monarch butterfly optimization approach, we performed hybridization with two other state-of-the-art swarm intelligence algorithms. The proposed hybrid algorithm was first tested on a set of standard unconstrained benchmark instances and later adapted for the convolutional neural network design problem. Comparative analysis with other state-of-the-art methods and algorithms, as well as with the original monarch butterfly optimization implementation, was performed for both groups of simulations. Experimental results show that our proposed method obtained higher classification accuracy than other approaches whose results have been published in the modern computer science literature.
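A generic population-based optimizer on the sphere function illustrates the kind of unconstrained benchmark the hybrid algorithm was first tested on. This is a deliberate simplification, not the authors' hybridized MBO: the "migration" step below is just a drift toward the current best plus noise.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    """Standard unconstrained benchmark: f(x) = sum(x_i^2), minimum 0."""
    return float(np.sum(x ** 2))

# 20 candidate solutions ("butterflies") in 4 dimensions.
pop = rng.uniform(-5, 5, size=(20, 4))
for _ in range(100):
    best = pop[np.argmin([sphere(x) for x in pop])]
    # Simplified "migration": drift toward the current best plus noise.
    pop = pop + 0.5 * (best - pop) + rng.normal(scale=0.1, size=pop.shape)

best_fit = min(sphere(x) for x in pop)
print(f"best fitness after 100 iterations: {best_fit:.4f}")
```

Swapping `sphere` for "train a CNN with these hyperparameters, return validation error" is exactly the adaptation step the abstract describes, which is why the algorithm is validated on cheap benchmarks first.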


Author(s):  
R. Niessner ◽  
H. Schilling ◽  
B. Jutzi

In recent years, there has been a significant improvement in the detection, identification and classification of objects and images using Convolutional Neural Networks. To study the potential of Convolutional Neural Networks, three approaches to training classifiers based on Convolutional Neural Networks are investigated in this paper. These approaches allow Convolutional Neural Networks to be trained on datasets containing only a few hundred training samples and still yield successful classification. Two of these approaches are based on the concept of transfer learning. In the first approach, features created by a pretrained Convolutional Neural Network are used for classification with a support vector machine. In the second approach, a pretrained Convolutional Neural Network is fine-tuned on a different dataset. The third approach comprises the design and training of flat Convolutional Neural Networks from scratch. The evaluation of the proposed approaches is based on a dataset provided by the IEEE Geoscience and Remote Sensing Society (GRSS) which contains RGB and LiDAR data of an urban area. This work shows that these Convolutional Neural Networks lead to classification results with high accuracy on both RGB and LiDAR data. Features derived from RGB data and transferred to LiDAR data by transfer learning lead to better classification results than RGB data alone. Using a neural network that contains fewer layers than common neural networks leads to the best classification results. Furthermore, it can be shown within this framework that the practical application of LiDAR images provides a better data basis for the classification of vehicles than the use of RGB images.
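The first approach, features from a frozen pretrained CNN classified with a support vector machine, can be sketched with scikit-learn. The random class-shifted vectors below merely stand in for CNN features; a real pipeline would extract them from RGB or LiDAR patches.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per, d = 100, 32

# Stand-in "CNN features" for two classes (e.g. vehicle / background):
# class means shifted apart so a linear boundary exists.
feats = np.vstack([rng.normal(loc=m, size=(n_per, d)) for m in (-1.0, 1.0)])
labels = np.repeat([0, 1], n_per)

# The SVM is the only component trained on the small labeled set.
clf = SVC(kernel="linear").fit(feats, labels)
acc = clf.score(feats, labels)
print(f"SVM training accuracy on frozen features: {acc:.2f}")
```

Because the SVM has far fewer parameters than a CNN, a few hundred samples suffice, which is the point of this transfer-learning variant.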


2018 ◽  
Vol 14 (10) ◽  
pp. 155014771880218 ◽  
Author(s):  
Libin Jiao ◽  
Rongfang Bie ◽  
Hao Wu ◽  
Yu Wei ◽  
Jixin Ma ◽  
...  

The use of smart sports equipment and body sensory systems supervising daily sports training is gradually emerging in professional and amateur sports; however, processing the large amounts of data from sensors used in sport and discovering constructive knowledge from them is a novel topic and the focus of our research. In this article, we investigate golf swing data classification methods based on a variety of representative convolutional neural networks (deep convolutional neural networks), fed with swing data from embedded multi-sensors, to group multi-channel golf swing data labeled by hybrid categories of golf player and swing shape. In particular, four convolutional neural classifiers are customized: "GolfVanillaCNN" with convolutional layers, "GolfVGG" with stacked convolutional layers, "GolfInception" with multi-scale convolutional layers, and "GolfResNet" with residual learning. Testing on a real-world swing dataset sampled from a system integrating two strain gage sensors, a three-axis accelerometer, and a three-axis gyroscope, we explore the accuracy and performance of our convolutional neural network–based classifiers from two perspectives: classification implementations and sensor combinations. We further evaluate the performance of these four classifiers in terms of classification accuracy, precision–recall curves, and F1 scores. These common classification indicators illustrate that our convolutional neural network–based classifiers can correctly group the golf swings predefined by combinations of shape and golf player, and outperform the support vector machine method representing traditional classification approaches.
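The evaluation indicators named above (accuracy, precision–recall, F1) can be computed with scikit-learn; the tiny label/prediction set below is made up purely to show the macro-averaged calculation over three swing categories.

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Hypothetical true vs. predicted swing-category labels (3 classes).
y_true = [0, 0, 1, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 1, 2, 2, 0]

# Macro averaging weights each swing category equally, which matters
# when the categories are imbalanced.
prec = precision_score(y_true, y_pred, average="macro")
rec = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")
print(f"precision={prec:.3f} recall={rec:.3f} F1={f1:.3f}")
```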


2020 ◽  
Vol 321 ◽  
pp. 11084
Author(s):  
Ryan Noraas ◽  
Vasisht Venkatesh ◽  
Luke Rettberg ◽  
Nagendra Somanath

Recent advances in machine learning and image recognition tools/methods are being used to address fundamental challenges in materials engineering, such as the automated extraction of statistical information from dual-phase titanium alloy microstructure images to support rapid engineering decision making. Initially, this work was performed by extracting dense-layer outputs from a pretrained convolutional neural network (CNN), running the high-dimensional image vectors through a principal component analysis, and fitting a logistic regression model for image classification. K-fold cross-validation results reported a mean validation accuracy of 83% over 19 different material pedigrees. Furthermore, it was shown that fine-tuning the pre-trained network improved image classification accuracy by nearly 10% over the baseline. These image classification models were then used to determine and justify statistically equivalent representative volume elements (SERVE). Lastly, a convolutional neural network was trained and validated to make quantitative predictions from synthetic and real two-phase image datasets. This paper explores the application of convolutional neural networks for microstructure analysis in the context of aerospace engineering and material quality.
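The baseline pipeline, dense-layer outputs → PCA → logistic regression scored with k-fold cross-validation, can be sketched with scikit-learn. Random class-shifted vectors stand in for the real CNN outputs of microstructure images; dimensions and component counts are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in "dense-layer outputs" for two material pedigrees: 128-d
# vectors with slightly shifted class means.
X = np.vstack([rng.normal(loc=m, size=(60, 128)) for m in (0.0, 0.6)])
y = np.repeat([0, 1], 60)

# PCA compresses the high-dimensional vectors before the linear model.
pipe = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=500))
scores = cross_val_score(pipe, X, y, cv=5)  # 5-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Wrapping PCA and the classifier in one pipeline keeps the cross-validation honest: the PCA is refitted inside each training fold rather than on the full dataset.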


Author(s):  
Sachin B. Jadhav

<span lang="EN-US">Plant pathologists desire soft computing technology for accurate and reliable diagnosis of plant diseases. In this study, we propose an efficient soybean disease identification method based on a transfer learning approach using pre-trained convolutional neural networks (CNNs) such as AlexNet, GoogleNet, VGG16, ResNet101, and DenseNet201. The proposed convolutional neural networks were trained on a 1200-image PlantVillage dataset of diseased and healthy soybean leaves to distinguish three soybean diseases from healthy leaves. Pre-trained CNNs were used to enable a fast and easy system implementation in practice. We used a five-fold cross-validation strategy to analyze the performance of the networks, employing the pre-trained convolutional neural networks as feature extractors and classifiers. The experimental results based on the proposed approach using pre-trained AlexNet, GoogleNet, VGG16, ResNet101, and DenseNet201 networks achieve accuracies of 95%, 96.4%, 96.4%, 92.1%, and 93.6%, respectively. The experimental results for the identification of soybean diseases indicate that the proposed network models achieve the highest accuracy.</span>


2021 ◽  
Author(s):  
Shima Baniadamdizaj ◽  
Mohammadreza Soheili ◽  
Azadeh Mansouri

Abstract Today, the integration of data from digital and paper documents is very important for efficient knowledge management. This requires the document to be localized within the image. Several methods have been proposed to solve this problem; however, they are mostly based on traditional image processing techniques that are not robust to extreme viewpoints and backgrounds. Deep Convolutional Neural Networks (CNNs), on the other hand, have proven to be extremely robust to variations in background and viewing angle for object detection and classification tasks. We propose a new use of neural networks (NNs) for the document localization problem. The proposed method can even localize documents that do not have an exactly rectangular shape. We also use a newly collected dataset that contains more challenging cases and is closer to casual user capture. Results are reported for three different categories of images, and our proposed method achieves 83% accuracy on average. The results are compared with the most popular document localization methods and mobile applications.
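Localizing a non-rectangular page can be posed as predicting its four corner points and scoring the resulting quadrilateral against ground truth with IoU. The corners, grid resolution, and ray-casting rasterization below are purely illustrative, not the paper's method or data.

```python
def inside(pt, poly):
    """Ray-casting point-in-polygon test for a list of (x, y) vertices."""
    x, y = pt
    hit = False
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        if (y0 > y) != (y1 > y) and x < x0 + (y - y0) * (x1 - x0) / (y1 - y0):
            hit = not hit
    return hit

# A page photographed at an angle is a general quadrilateral, not an
# axis-aligned rectangle (normalized image coordinates, made up).
truth = [(0.10, 0.10), (0.90, 0.15), (0.85, 0.90), (0.12, 0.80)]
pred = [(0.12, 0.10), (0.90, 0.18), (0.84, 0.88), (0.14, 0.80)]

# Rasterize both quadrilaterals on a 100x100 grid and compute IoU.
grid = [(i / 100, j / 100) for i in range(100) for j in range(100)]
t = {p for p in grid if inside(p, truth)}
q = {p for p in grid if inside(p, pred)}
iou = len(t & q) / len(t | q)
print(f"rasterized IoU: {iou:.2f}")
```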

