APPLICATION OF COMPUTER VISION TECHNOLOGIES FOR THE DEVELOPMENT OF A MODEL FOR THE RECOGNITION OF LESIONS OF CULTIVATED PLANTS

Author(s):  
N.A. Yanishevskaya ◽  
I.P. Bolodurina

In the Russian Federation, the agro-industrial complex is one of the leading sectors of the economy, accounting for 4.5% of gross domestic product. Russia owns 10% of all arable land in the world. According to data on sown areas by crop in 2020, most of the agricultural area of Russia is occupied by wheat. The Russian Federation ranks third among the leading countries in the production of this type of grain crop and holds leading positions in its export. Brown (leaf) and linear (stem) rust are the most harmful diseases of grain crops. They cause sparseness of wheat crops and lead to a sharp decrease in yield. Therefore, one of the main tasks of farmers is to protect the crop from diseases. The application of such areas of artificial intelligence as computer vision, machine learning, and deep learning can cope with this task. These artificial intelligence technologies make it possible to solve applied problems of the agro-industrial complex through automated analysis of photographic materials. Aim. To consider the application of computer vision methods to the problem of classifying lesions of cultivated plants, using wheat as an example. Materials and methods. The CGIAR Computer Vision for Crop Disease dataset for the crop disease recognition task is taken from the open source Kaggle. An approach to the recognition of lesions of cultivated plants is proposed using the well-known neural network models ResNet50, DenseNet169, VGG16, and EfficientNet-B0. The neural network models receive images of wheat as input; their output is the class of plant damage. To overcome the effect of overfitting, various regularization techniques are investigated. Results. The classification quality, estimated using the F1-score metric (the harmonic mean of Precision and Recall), is presented. Conclusion.
The conducted research found that the DenseNet model showed the best recognition accuracy, using a combination of transfer learning and the DropOut and L2 regularization techniques to overcome overfitting. This approach achieved a recognition accuracy of 91%.
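The F1-score used above, the harmonic mean of Precision and Recall, can be sketched in a few lines (the function name is illustrative, not from the paper's code):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: a classifier with 90% precision and 92% recall
print(round(f1_score(0.90, 0.92), 4))  # lies between the two, closer to the smaller
```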

10.14311/1121 ◽  
2009 ◽  
Vol 49 (2) ◽  
Author(s):  
M. Chvalina

This article analyses the existing possibilities for using Standard Statistical Methods and Artificial Intelligence Methods for a short-term forecast and simulation of demand in the field of telecommunications. The most widespread methods are based on Time Series Analysis. Nowadays, approaches based on Artificial Intelligence Methods, including Neural Networks, are booming. Separate approaches will be used in the study of Demand Modelling in Telecommunications, and the results of these models will be compared with actual guaranteed values. Then we will examine the quality of Neural Network models. 
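On the Time Series Analysis side of such a comparison, a minimal sketch of simple exponential smoothing for a one-step-ahead demand forecast (the smoothing constant and demand values here are invented for illustration, not taken from the article):

```python
def exp_smooth_forecast(series, alpha=0.3):
    """Simple exponential smoothing: each new level blends the latest
    observation with the previous level; the final level is the forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

demand = [120, 132, 101, 134, 90, 130, 110]  # hypothetical monthly demand
print(exp_smooth_forecast(demand, alpha=0.3))
```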


Author(s):  
Rajesh Sai K. ◽  
Veneela Adapa ◽  
Hari Kishan Kondaveeti

Unknowingly, artificial intelligence (AI) has become an inevitable part of our lives. In this chapter, the authors discuss how the neural networks, a sub-part of AI, changed the way we analyse things. In this chapter, the advent of neural networks, inspiration from the human brain, simplification models of biological neuron models are discussed. Later, a detailed overview of various neural network models, their strengths, limitations, applications, and challenges are presented in detail.


2019 ◽  
Author(s):  
Courtney J Spoerer ◽  
Tim C Kietzmann ◽  
Johannes Mehrer ◽  
Ian Charest ◽  
Nikolaus Kriegeskorte

Abstract
Deep feedforward neural network models of vision dominate in both computational neuroscience and engineering. The primate visual system, by contrast, contains abundant recurrent connections. Recurrent signal flow enables recycling of limited computational resources over time, and so might boost the performance of a physically finite brain or model. Here we show: (1) Recurrent convolutional neural network models outperform feedforward convolutional models matched in their number of parameters in large-scale visual recognition tasks on natural images. (2) Setting a confidence threshold, at which recurrent computations terminate and a decision is made, enables flexible trading of speed for accuracy. At a given confidence threshold, the model expends more time and energy on images that are harder to recognise, without requiring additional parameters for deeper computations. (3) The recurrent model’s reaction time for an image predicts the human reaction time for the same image better than several parameter-matched and state-of-the-art feedforward models. (4) Across confidence thresholds, the recurrent model emulates the behaviour of feedforward control models in that it achieves the same accuracy at approximately the same computational cost (mean number of floating-point operations). However, the recurrent model can be run longer (higher confidence threshold) and then outperforms parameter-matched feedforward comparison models. These results suggest that recurrent connectivity, a hallmark of biological visual systems, may be essential for understanding the accuracy, flexibility, and dynamics of human visual recognition.

Author summary
Deep neural networks provide the best current models of biological vision and achieve the highest performance in computer vision. Inspired by the primate brain, these models transform the image signals through a sequence of stages, leading to recognition. Unlike brains, in which the outputs of a given computation are fed back into the same computation, these models do not process signals recurrently. The ability to recycle limited neural resources by processing information recurrently could explain the accuracy and flexibility of biological visual systems, which computer vision systems cannot yet match. Here we report that recurrent processing can improve recognition performance compared to similarly complex feedforward networks. Recurrent processing also enabled models to behave more flexibly and trade off speed for accuracy. Like humans, the recurrent network models can compute longer when an object is hard to recognise, which boosts their accuracy. The model’s recognition times predicted human recognition times for the same images. The performance and flexibility of recurrent neural network models illustrate that modelling biological vision can help us improve computer vision.
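The speed-accuracy trade-off described in point (2) can be illustrated with a toy evidence accumulator (the update rule, thresholds, and numbers below are illustrative assumptions, not the paper's model): a softmax readout is computed at each recurrent step, and computation halts once the leading class crosses a confidence threshold.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def run_until_confident(evidence_per_step, threshold=0.9, max_steps=50):
    """Accumulate class evidence step by step; stop when the softmax
    probability of the leading class reaches the confidence threshold.
    Returns (predicted class, number of recurrent steps used)."""
    acc = np.zeros_like(evidence_per_step)
    for t in range(1, max_steps + 1):
        acc += evidence_per_step          # one recurrent "time step"
        p = softmax(acc)
        if p.max() >= threshold:
            return int(p.argmax()), t
    return int(p.argmax()), max_steps

# An "easy" image yields strong evidence and stops early; a "hard" image
# needs more recurrent steps to reach the same confidence threshold.
easy = np.array([2.0, 0.1, 0.1])
hard = np.array([0.4, 0.3, 0.1])
print(run_until_confident(easy))   # stops after a few steps
print(run_until_confident(hard))   # takes noticeably more steps
```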


2021 ◽  
Author(s):  
Kanimozhi V ◽  
T. Prem Jacob

Abstract Although there exist various strategies for IoT intrusion detection, this research article sheds light on how the top 10 artificial intelligence deep-learning models can be applied to both supervised and unsupervised learning on IoT network traffic data. It presents a detailed comparative analysis of anomaly detection on IoT devices, using the latest dataset, IoT-23. Many strategies are being developed for securing IoT networks, but further development is still needed. IoT security can be improved by the use of various deep learning methods. This study examined the top 10 deep-learning techniques on the realistic IoT-23 dataset to improve the security of IoT network traffic. We built various neural network models to identify five kinds of record classes: Mirai, Denial of Service (DoS), Scan, Man-in-the-Middle attack (MITM-ARP), and Normal records. These attacks can be detected by using a "softmax" function for multiclass classification in deep-learning neural network models. This research was implemented in the Anaconda3 environment with packages such as Pandas, NumPy, SciPy, Scikit-learn, TensorFlow 2.2, Matplotlib, and Seaborn. AI deep-learning models have been adopted across domains such as healthcare, banking and finance, scientific research, and business, along with concepts like the Internet of Things. We found that the top 10 deep-learning models are capable of increasing accuracy while minimizing the loss function and the execution time for building each model. This work contributes significantly to IoT anomaly detection using the emerging technologies of artificial intelligence and deep learning neural networks. Hence, the mitigation of attacks on an IoT network will be more effective.
Among the top 10 neural networks, convolutional neural networks, the multilayer perceptron, and generative adversarial networks (GANs) produced the highest accuracy scores of 0.996317, 0.996157, and 0.995829, with minimized loss functions and shorter execution times. This article helps the reader fully grasp the nuances of IoT anomaly detection. It thus depicts implementations of the top 10 AI deep-learning models, helping the reader better understand the different neural network models and IoT anomaly detection.
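The final classification step the abstract describes, a "softmax" output over the five record classes, can be sketched as follows (the class ordering and the logit values are illustrative assumptions, not from the paper's models):

```python
import math

CLASSES = ["Mirai", "DoS", "Scan", "MITM-ARP", "Normal"]

def softmax(logits):
    """Numerically stable softmax: a probability over the record classes."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(logits):
    """Return the most probable class and its softmax confidence."""
    probs = softmax(logits)
    best = probs.index(max(probs))
    return CLASSES[best], probs[best]

# Hypothetical final-layer outputs for one traffic record
label, confidence = classify([0.2, 3.1, 0.4, -0.5, 0.1])
print(label)  # the largest logit wins: "DoS"
```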


2018 ◽  
Vol 6 (11) ◽  
pp. 216-216 ◽  
Author(s):  
Zhongheng Zhang ◽  
Marcus W. Beck ◽  
David A. Winkler ◽  
Bin Huang ◽  
...  

2021 ◽  
Vol 1 (1) ◽  
pp. 19-29
Author(s):  
Zhe Chu ◽  
Mengkai Hu ◽  
Xiangyu Chen

Recently, deep learning has been successfully applied to robotic grasp detection. Based on convolutional neural networks (CNNs), there have been many end-to-end detection approaches. But end-to-end approaches have strict requirements for the dataset used to train the neural network models, which are hard to satisfy in practical use. Therefore, we propose a two-stage approach using a particle swarm optimizer (PSO) candidate estimator and a CNN to detect the most likely grasp. Our approach achieved an accuracy of 92.8% on the Cornell Grasp Dataset, which places it among the front ranks of existing approaches, and it is able to run at real-time speeds. With a small change to the approach, we can predict multiple grasps per object at the same time, so that an object can be grasped in a variety of ways.
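A hedged sketch of the first stage, a particle swarm optimizer proposing candidates: the objective, bounds, and hyperparameters below are invented for illustration (the paper's candidate estimator scores grasp rectangles, whereas this toy minimizes a simple function):

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=100, seed=0):
    """Minimal PSO: each particle tracks its personal best, and the swarm
    shares a global best that attracts everyone. Returns (best position, value)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive and social weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective; in the two-stage grasp pipeline a CNN would score candidates
sphere = lambda p: sum(x * x for x in p)
best, val = pso_minimize(sphere)
print(val)  # close to the minimum at the origin
```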


Author(s):  
Ming Zhang

Real-world financial data is often discontinuous and non-smooth. Accuracy becomes a problem if we attempt to use neural networks to simulate such functions. Neural network group models can perform this task with greater accuracy. Both the Polynomial Higher Order Neural Network Group (PHONNG) and Trigonometric polynomial Higher Order Neural Network Group (THONNG) models are studied in this chapter. These PHONNG and THONNG models are open-box, convergent models capable of approximating any kind of piecewise continuous function to any degree of accuracy. Moreover, they are capable of handling higher-frequency, higher-order nonlinear, and discontinuous data. Results obtained using the Polynomial Higher Order Neural Network Group and Trigonometric polynomial Higher Order Neural Network Group financial simulators are presented, which confirm that the PHONNG and THONNG group models converge without difficulty and are considerably more accurate (by 0.7542% to 1.0715%) than neural network models such as the Polynomial Higher Order Neural Network (PHONN) and Trigonometric polynomial Higher Order Neural Network (THONN) models.
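The higher-order idea, feeding products of inputs rather than only raw inputs into a linear output layer, can be shown in miniature as a plain least-squares fit over second-order cross terms (a simplification for illustration; the grouped PHONNG/THONNG architecture itself is more elaborate, and the data here is synthetic):

```python
import numpy as np

def second_order_features(X):
    """Expand inputs with higher-order terms: 1, each x_i, and every
    product x_i * x_j (i <= j), i.e. a second-order polynomial basis."""
    cols = [np.ones(len(X))]
    n = X.shape[1]
    for i in range(n):
        cols.append(X[:, i])
    for i in range(n):
        for j in range(i, n):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

# Fit y = 3 + 2*x0*x1 exactly, since the cross term is in the basis
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = 3 + 2 * X[:, 0] * X[:, 1]
w, *_ = np.linalg.lstsq(second_order_features(X), y, rcond=None)
pred = second_order_features(X) @ w
print(np.max(np.abs(pred - y)))  # essentially zero: the target lies in the basis
```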


Author(s):  
Joarder Kamruzzaman ◽  
Ruhul Sarker

The primary aim of this chapter is to present an overview of the artificial neural network basics and operation, architectures, and the major algorithms used for training the neural network models. As can be seen in subsequent chapters, neural networks have made many useful contributions to solve theoretical and practical problems in finance and manufacturing areas. The secondary aim here is therefore to provide a brief review of artificial neural network applications in finance and manufacturing areas.


This chapter develops two new nonlinear artificial higher order neural network models: the sine higher order neural network (SIN-HONN) and the cosine higher order neural network (COS-HONN). Financial data prediction using the SIN-HONN and COS-HONN models is tested. Results show that SIN-HONN and COS-HONN are good models for simulating and predicting financial data with purely sine or purely cosine features, compared with the polynomial higher order neural network (PHONN) and trigonometric higher order neural network (THONN) models.
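For data with a purely sine feature, the SIN-HONN idea of using sine terms as the model's basis can be sketched as a least-squares fit over sine harmonics (the frequencies and series below are illustrative assumptions, not the chapter's financial data):

```python
import numpy as np

def sin_basis(t, n_harmonics=3):
    """Columns 1, sin(t), sin(2t), ..., sin(n*t): a sine-only basis."""
    cols = [np.ones_like(t)] + [np.sin(k * t) for k in range(1, n_harmonics + 1)]
    return np.column_stack(cols)

t = np.linspace(0, 2 * np.pi, 80)
y = 1.5 + 0.8 * np.sin(2 * t)          # hypothetical sine-feature series
w, *_ = np.linalg.lstsq(sin_basis(t), y, rcond=None)
print(np.round(w, 3))  # recovers the intercept and the sin(2t) coefficient
```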

