A Steel Surface Defect Recognition Algorithm Based on Improved Deep Learning Network Model Using Feature Visualization and Quality Evaluation

IEEE Access, 2020, Vol. 8, pp. 49885-49895
Author(s): Shengqi Guan, Ming Lei, Hao Lu

2021, Vol. 261, pp. 01021
Author(s): Jiwei Li, Linsheng Li, Changlu Xu

In the field of defect recognition, deep learning technology has the advantages of strong generalization and high accuracy compared with mainstream machine learning technology. This paper proposes a deep learning network model that first preprocesses a self-made dataset of 3,600 images and then feeds them into the constructed convolutional neural network model for training. The trained model can effectively identify three types of defects on lithium battery pole pieces, with an accuracy of 92%. Compared with the AlexNet architecture, the model proposed in this paper achieves higher accuracy.
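The abstract above describes a convolutional neural network for pole-piece defect classification but gives no architectural detail. As a minimal sketch of the building blocks such a model stacks (convolution, ReLU, max-pooling), the following NumPy toy example runs them on a synthetic 8x8 "scratch" image; the kernel, sizes, and defect pattern are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 8x8 "defect image": a bright vertical scratch on a dark background.
img = np.zeros((8, 8))
img[:, 4] = 1.0

# A vertical-edge kernel responds strongly to the scratch.
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)

feature_map = max_pool(relu(conv2d(img, kernel)))
print(feature_map.shape)  # (3, 3)
```

A real defect classifier stacks many such conv/ReLU/pool stages and ends with fully connected layers; this sketch only shows why an edge-like kernel lights up on a scratch-shaped defect.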


Author(s): Liyong Chen, Xiuye Yin

To address the problem that individual coordinates are easily overlooked when localizing abnormal behavior in marine fish, leading to low recognition accuracy, low execution efficiency, and a high false alarm rate, this paper proposes a fish abnormal behavior recognition method based on a deep learning network model. First, shadows are removed from the fish behavior data, and the background image is subtracted from each frame to obtain a grayscale image of the fish school. Then, a marker-based watershed algorithm is used to segment the fish and obtain the coordinates of different individuals in the school. Combining the constraints of the experimental scale and the number of fish with the deep learning network model, the weak links in video-label monitoring of abnormal marine fish behavior are analyzed. Finally, a multiple-instance learning method and a two-stream network model are used to identify anomalies in the marine fish school. Experimental results show that the method achieves high recognition accuracy, a low false alarm rate, and high execution efficiency, and can serve as a practical reference for related research in this field.
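As a minimal sketch of the first steps described above (subtracting the background from a frame, then locating an individual's coordinates), the following NumPy toy example thresholds the frame-background difference into a binary mask and takes the mask centroid as the individual's position. The actual method additionally uses shadow removal and marker-based watershed segmentation, which this sketch does not implement; the frame, threshold, and blob here are invented for illustration.

```python
import numpy as np

# Synthetic static background and one video frame containing a "fish" blob.
background = np.full((6, 8), 0.2)
frame = background.copy()
frame[2:4, 3:6] = 0.9          # the fish appears as a bright region

# Subtract the background and threshold to obtain a binary fish mask.
diff = np.abs(frame - background)
mask = diff > 0.3

# The centroid of the mask approximates the individual's coordinates.
ys, xs = np.nonzero(mask)
centroid = (ys.mean(), xs.mean())
print(centroid)  # (2.5, 4.0)
```

With several fish in one frame, a connected-component or watershed step would split the mask into per-individual regions before taking each centroid, which is where the marker-based watershed algorithm comes in.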


2021, Vol. 13 (3), pp. 504
Author(s): Wanting Yang, Xianfeng Zhang, Peng Luo

The collapse of buildings caused by earthquakes can lead to a large loss of life and property. Rapid assessment of building damage with remote sensing image data can support emergency rescues. However, current studies indicate that only a limited sample set can usually be obtained from remote sensing images immediately following an earthquake. Consequently, the difficulty of preparing sufficient training samples constrains the generalization of models for identifying earthquake-damaged buildings. To produce a deep learning network model with strong generalization, this study adjusted four Convolutional Neural Network (CNN) models for extracting damaged-building information and compared their performance. A sample dataset of damaged buildings was constructed using multiple disaster images retrieved from the xBD dataset. Using satellite and aerial remote sensing data obtained after the 2008 Wenchuan earthquake, we examined the geographic and data transferability of the deep network model pre-trained on the xBD dataset. The results show that a network model pre-trained with samples generated from multiple disaster remote sensing images can accurately extract collapsed-building information from satellite remote sensing data. Among the adjusted CNN models tested in the study, the adjusted DenseNet121 was the most robust. Transfer learning solved the problem of the network model's poor adaptability to remote sensing images acquired by different platforms and could properly identify disaster-damaged buildings. These results provide a solution for the rapid extraction of earthquake-damaged building information based on a deep learning network model.
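The transfer-learning recipe the abstract relies on, keeping a pre-trained feature extractor fixed and training only a new classification head, can be illustrated without any deep learning framework. In this hedged NumPy sketch, a random frozen projection stands in for the pre-trained DenseNet121 backbone, and only a logistic-regression head is trained on a toy binary (collapsed vs. intact) task; all sizes, data, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: these weights stay frozen during fine-tuning.
W_frozen = rng.normal(size=(16, 8))

def features(x):
    return np.tanh(x @ W_frozen)          # stand-in for the frozen backbone

# Toy binary task (collapsed vs. intact), linearly separable in feature space.
X = rng.normal(size=(64, 16))
w_true = rng.normal(size=8)
y = (features(X) @ w_true > 0).astype(float)

# Only the new classification head is trained (logistic regression via GD).
w_head = np.zeros(8)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w_head)))
    w_head -= 0.5 * features(X).T @ (p - y) / len(y)

acc = np.mean((features(X) @ w_head > 0) == (y > 0.5))
print(acc)
```

Because gradients never touch `W_frozen`, the (small) post-earthquake sample set only has to fit the head's 8 parameters rather than the whole backbone, which is the reason transfer learning helps when labelled disaster imagery is scarce.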


In the last few years, deep learning has been one of the top research areas in academia as well as in industry, and every industry is now looking for deep learning-based solutions to the problems at hand. For a researcher, learning deep learning through practical experiments is a very challenging task. In particular, training a deep learning network with a huge amount of training data is impractical on a normal desktop computer or laptop. Even a small-scale computer vision application using deep learning techniques can require several days of training the deep network model on high-end Graphical Processing Unit (GPU) or Tensor Processing Unit (TPU) clusters, which makes such research impractical on a conventional laptop. In this work, we address the possibility of training a deep learning network with a significantly small dataset, by which we mean a dataset with only a few images (<10) per class. Since we design a prototype drone detection system, which is a single-class classification problem, we train the deep learning network with only a few drone images (two images). Our research question is: is it possible to train a YOLO deep learning network model with only two images and achieve decent detection accuracy on a constrained test dataset of drones? This paper addresses that question, and our results show that it is possible to train a deep learning network with only two images and achieve good performance under constrained application environments.
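Two training images are far too few for a detector unless the effective dataset is enlarged somehow. One common way to do this, which is an assumption here since the abstract does not say how the authors handled it, is aggressive augmentation of the base images. The following NumPy sketch expands two synthetic images into a hundred randomized variants via flips, rotations, and brightness jitter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Only two labelled "drone" images are available (synthetic stand-ins here).
base_images = [rng.random((16, 16)) for _ in range(2)]

def augment(img, rng):
    """Produce one randomized variant: flip, 90-degree rotation, brightness."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    img = np.rot90(img, k=rng.integers(0, 4))
    return np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)

# Expand the two originals into a larger training set.
train_set = [augment(img, rng) for img in base_images for _ in range(50)]
print(len(train_set))  # 100
```

A YOLO model would be fine-tuned on such an expanded set rather than on the two raw images; whether that, pre-trained weights, or some other trick is what made two-image training viable in this paper is not stated in the abstract.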

