Wind turbine tower detection using feature descriptors and deep learning

2020, Vol 33 (1), pp. 133-153
Author(s): Fereshteh Abedini, Mahdi Bahaghighat, Misak S’hoyan

Wind Turbine Towers (WTTs) are the main structures of wind farms. They are costly devices that must be thoroughly inspected according to maintenance plans. Today, machine vision techniques combined with unmanned aerial vehicles (UAVs) enable fast, easy, and intelligent visual inspection of these structures. Our work aims to develop a vision-based system to perform non-destructive tests (NDTs) on wind turbines using UAVs. In order to navigate the flying machine toward the wind turbine tower and reliably land on it, the exact position of the wind turbine and its tower must be detected. We employ several strong computer vision approaches, such as the Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), and Features from Accelerated Segment Test (FAST) detectors together with Brute-Force and Fast Library for Approximate Nearest Neighbors (FLANN) matchers, to detect the WTT. Then, to increase the reliability of the system, we apply the ResNet, MobileNet, ShuffleNet, EffNet, and SqueezeNet pre-trained classifiers to verify whether a detected object is indeed a turbine tower. This intelligent monitoring system has auto-navigation ability and can be used for future goals, including intelligent fault diagnosis and maintenance. The simulation results show that the accuracy of the proposed model is 89.4% in WTT detection and 97.74% in verification (classification).
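The descriptor-matching step common to the Brute-Force and FLANN approaches mentioned above is typically combined with Lowe's ratio test. The sketch below is not the authors' implementation; it is a plain NumPy nearest-neighbor search over synthetic 128-dimensional, SIFT-like descriptors, assuming Euclidean distance and a conventional ratio of 0.75:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Match descriptors from image A to image B using brute-force
    nearest-neighbor search with Lowe's ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in B
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        # Keep the match only if the best distance is clearly smaller
        # than the second best (rejects ambiguous correspondences)
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Synthetic 128-d "SIFT-like" descriptors: B is a slightly noisy copy of A
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(50, 128))
desc_b = desc_a + rng.normal(scale=0.01, size=(50, 128))
matches = ratio_test_match(desc_a, desc_b)
print(len(matches))   # all 50 descriptors match their noisy copies
print(matches[0])     # (0, 0)
```

In a real pipeline the descriptors would come from a SIFT or SURF extractor and the linear scan would be replaced by FLANN's approximate index for speed.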

2020, Vol 10 (24), pp. 8994
Author(s): Dong-Hwa Jang, Kyeong-Seok Kwon, Jung-Kon Kim, Ka-Young Yang, Jong-Bok Kim

Currently, invasive and external radio frequency identification (RFID) devices and pet tags are widely used for dog identification. However, social problems such as abandoning and losing dogs are constantly increasing. A more effective alternative to the existing identification methods is required, and biometrics can be that alternative. This paper proposes an effective dog muzzle recognition method to identify individual dogs. The proposed method consists of preprocessing, feature extraction, matching, and postprocessing. For preprocessing, resizing and histogram equalization are used. For feature extraction, the Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Binary Robust Invariant Scalable Keypoints (BRISK), and Oriented FAST and Rotated BRIEF (ORB) algorithms are applied and compared. For matching, the Fast Library for Approximate Nearest Neighbors (FLANN) is used for SIFT and SURF, and the Hamming distance is used for BRISK and ORB. For postprocessing, two techniques to reduce incorrect matches are proposed. The proposed method was evaluated with 55 dog muzzle pattern images acquired from 11 dogs and 990 images augmented by image deformation (i.e., angle, illumination, noise, affine transform). The best Equal Error Rate (EER) of the proposed method was 0.35%, and ORB was the most appropriate for dog muzzle pattern recognition.
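The Hamming-distance matching used here for the binary BRISK and ORB descriptors can be sketched in a few lines. The following is a hypothetical NumPy illustration on synthetic 256-bit descriptors (not the paper's code); it adds a mutual cross-check, one common way to reduce incorrect matches:

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """Brute-force matching of packed binary descriptors (e.g. ORB,
    BRISK) by minimum Hamming distance, with a mutual cross-check."""
    # XOR the packed bytes, then count the set bits for every pair
    x = np.bitwise_xor(desc_a[:, None, :], desc_b[None, :, :])
    d = np.unpackbits(x, axis=2).sum(axis=2)
    best_ab = d.argmin(axis=1)   # nearest B for each A
    best_ba = d.argmin(axis=0)   # nearest A for each B
    # Keep only mutually nearest pairs (cross-check)
    return [(i, int(j)) for i, j in enumerate(best_ab) if best_ba[j] == i]

rng = np.random.default_rng(1)
# 32-byte (256-bit) ORB-like descriptors; B is A with ~2% of bits flipped
desc_a = rng.integers(0, 256, size=(20, 32), dtype=np.uint8)
flips = np.packbits(rng.random((20, 32, 8)) < 0.02, axis=2)[:, :, 0]
desc_b = np.bitwise_xor(desc_a, flips)

matches = hamming_match(desc_a, desc_b)
print(matches[:3])   # [(0, 0), (1, 1), (2, 2)]
```

Because binary descriptors are compared with bitwise operations rather than floating-point arithmetic, this matcher is much cheaper per pair than the Euclidean matching used for SIFT and SURF.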


Data, 2018, Vol 3 (4), pp. 52
Author(s): Oleksii Gorokhovatskyi, Volodymyr Gorokhovatskyi, Olena Peredrii

In this paper, we propose an investigation of the properties of structural image recognition methods in the cluster space of characteristic features. Recognition based on keypoint descriptors such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF) typically involves searching for corresponding descriptor values between an input image and all etalon images, which requires many operations and much time. We describe recognition over previously quantized (clustered) sets of descriptor features. Clustering is performed across the complete set of etalon image descriptors and is followed by screening, which allows each etalon image to be represented in vector form as a distribution over clusters. Owing to this representation, the number of computation and comparison procedures, which are the core of the recognition process, can be reduced by tens of times; in exchange, the preprocessing stage takes additional time for clustering. The implementation of the proposed approach was tested on the Leeds Butterfly dataset, and the dependence of recognition performance and processing time on the number of clusters was investigated. It was shown that recognition may be performed up to nine times faster, with only a moderate decrease in recognition quality, compared to searching for correspondences between all existing descriptors of the etalon images and the input image without quantization.
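The cluster-space representation described above is essentially a bag-of-visual-words scheme: pool the descriptors of all etalon images, cluster them, and describe each image by its distribution over the clusters. Below is a minimal NumPy sketch with a toy k-means and synthetic 16-d descriptors; all sizes and the cluster count are illustrative choices, not values from the paper:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means over a pooled descriptor set."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def cluster_histogram(desc, centers):
    """Represent one image in vector form: the normalized distribution
    of its descriptors over the learned clusters."""
    labels = np.linalg.norm(desc[:, None] - centers[None], axis=2).argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Two synthetic "etalon" images whose descriptors occupy different regions
rng = np.random.default_rng(0)
img1 = rng.normal(0.0, 0.3, size=(100, 16))
img2 = rng.normal(3.0, 0.3, size=(100, 16))
centers = kmeans(np.vstack([img1, img2]), k=8)

h1 = cluster_histogram(img1, centers)
h2 = cluster_histogram(img2, centers)
query = cluster_histogram(rng.normal(0.0, 0.3, size=(100, 16)), centers)
# The query (drawn like img1) is closer to img1's cluster distribution,
# so recognition needs only one 8-d comparison per etalon image
print(np.linalg.norm(query - h1) < np.linalg.norm(query - h2))   # True
```

Comparing short cluster histograms instead of every descriptor pair is what yields the order-of-magnitude speed-up the paper reports; the clustering itself is the extra preprocessing cost.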


Sensors, 2020, Vol 20 (17), pp. 4922
Author(s): Like Cao, Jie Ling, Xiaohui Xiao

Noise appears in images captured by real cameras. This paper studies the influence of noise on monocular feature-based visual Simultaneous Localization and Mapping (SLAM). First, an open-source synthetic dataset with different noise levels is introduced. Then, the images in the dataset are denoised using the Fast and Flexible Denoising convolutional neural Network (FFDNet), and the matching performance of the Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB), which are commonly used in feature-based SLAM, is analyzed. The results show that ORB has a higher correct matching rate than SIFT and SURF, and that denoised images have a higher correct matching rate than noisy images. Next, the Absolute Trajectory Error (ATE) of the noisy and denoised sequences is evaluated on ORB-SLAM2, and the results show that the denoised sequences perform better than the noisy sequences at every noise level. Finally, the completely clean sequence in the dataset and the sequences in the KITTI dataset are denoised and compared with the original sequences through comprehensive experiments. For the clean sequence, the Root-Mean-Square Error (RMSE) of the ATE after denoising decreased by 16.75%; for the KITTI sequences, 7 out of 10 sequences have a lower RMSE than the original sequences. These results show that denoised images can achieve higher accuracy in monocular feature-based visual SLAM under certain conditions.
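The evaluation metric used here, the RMSE of the ATE, reduces to a simple computation once the estimated and ground-truth trajectories are time-aligned. A minimal sketch on a toy drifting trajectory (this assumes the trajectories are already aligned; a full evaluation would first estimate a similarity transform between them):

```python
import numpy as np

def ate_rmse(gt, est):
    """Root-mean-square of the Absolute Trajectory Error between
    time-aligned ground-truth and estimated camera positions."""
    err = np.linalg.norm(gt - est, axis=1)   # per-pose position error
    return np.sqrt(np.mean(err ** 2))

# Toy 3-D trajectory: the estimate drifts linearly along x over time
t = np.linspace(0, 1, 100)
gt = np.stack([np.cos(t), np.sin(t), t], axis=1)
est = gt + 0.05 * np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)

print(round(ate_rmse(gt, est), 4))   # 0.0289
```

Reporting the RMSE over the whole sequence, as the paper does for the noisy and denoised runs, summarizes accumulated drift in a single number.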


2019
Author(s): Sreeram Rajesh, Shruti Konka, Sabareesh G.R

In wind farms, turbine structures stand very close to each other, which usually enhances or reduces the pressure load on the surrounding structures. This phenomenon, termed the interference effect, is therefore important from a design point of view. Provisions for considering interference effects while designing structures are inadequate in wind loading codes and standards. The present paper investigates the interference factors for two wind turbines located in close vicinity and also compares the interference factor of a wind turbine tower with that of a rectangular building-type structure. A key observation from the results of this study is the nature of the variation of the interference factor for different wind incidence angles when the structures are separated by varying distances. The wind interference factors are observed at different locations on the wind turbine tower, and an attempt is made to categorize the various trends observed.


2012, Vol 263-266, pp. 2418-2421
Author(s): Sheng Ke Wang, Lili Liu, Xiaowei Xu

In this paper, we present a comparison of the scale-invariant feature transform (SIFT)-based feature-matching scheme and the speeded up robust features (SURF)-based feature-matching scheme in the field of vehicle logo recognition. We capture a set of logo images that vary in illumination, blur, scale, and rotation. Training sets for six kinds of vehicle logos are formed using 25 images each on average, and the remaining images form the testing set. The logo recognition system that we programmed achieves a high recognition rate on query images of the same kind when different parameters are adjusted.


2017, Vol 8 (4), pp. 45-58
Author(s): Mohammed Amin Belarbi, Saïd Mahmoudi, Ghalem Belalem

Dimensionality reduction plays an important role in the performance of large-scale image retrieval across different applications. In this paper, we explore Principal Component Analysis (PCA) as a dimensionality reduction method. For this purpose, first, Scale-Invariant Feature Transform (SIFT) features and Speeded Up Robust Features (SURF) are extracted as image features. Second, PCA is applied to reduce the dimensions of the SIFT and SURF feature descriptors. By comparing multiple sets of experimental data across different image databases, we conclude that PCA with an appropriately chosen reduced dimensionality can effectively reduce the computational cost of image features while maintaining high retrieval performance.
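The PCA reduction step can be sketched directly with an SVD of the centered descriptor matrix. The snippet below is a generic illustration on random 128-d, SIFT-like descriptors; the target dimensionality of 32 is an arbitrary choice for the example, not a value from the paper:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project descriptors onto their top principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components].T          # (d, n_components) projection basis
    return Xc @ W, (mean, W)

# 500 synthetic 128-d SIFT-like descriptors reduced to 32 dimensions
rng = np.random.default_rng(0)
desc = rng.normal(size=(500, 128))
reduced, (mean, W) = pca_reduce(desc, 32)
print(reduced.shape)   # (500, 32)

# Query descriptors must be projected with the same mean and basis
query = rng.normal(size=(10, 128))
print(((query - mean) @ W).shape)   # (10, 32)
```

Distance computations in the reduced space cost a quarter of the original per-pair work here, which is the source of the retrieval speed-up the paper measures.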


2020, Vol 2020, pp. 1-13
Author(s): Xiuxia Feng, Guangwei Cai, Xiaofang Gou, Zhaoqiang Yun, Wenhui Wang, ...

Mosaicking of retinal images is potentially useful for ophthalmologists and computer-aided diagnostic schemes. Vascular bifurcations can be used as features for matching and stitching of retinal images. A fully convolutional network model is employed to segment vascular structures in retinal images, and bifurcations are then extracted as feature points on the vascular mask by a robust and efficient approach. Transformation parameters for stitching can be estimated from the correspondences between vascular bifurcations. The proposed feature detection and mosaicking method is evaluated on 62 retinal images from 14 different eyes. The proposed method achieves a considerably higher average recall rate of matching for paired images than speeded-up robust features and the scale-invariant feature transform, and its running time is also lower than that of the other methods. The results produced by the proposed method are superior to those of AutoStitch, the Photomerge function in Photoshop CS6, and ICE, demonstrating that accurate matching of detected vascular bifurcations can lead to high-quality mosaics of retinal images.
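Estimating the stitching transformation from matched bifurcations is, at its core, a least-squares fit (in practice it would be wrapped in a robust estimator such as RANSAC to reject false correspondences). A minimal NumPy sketch fitting a 2-D affine transform to synthetic, noise-free point pairs; the rotation and translation values are illustrative only:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points to dst.
    Solves [x y 1] @ M = [x' y'] for the 3x2 parameter matrix M."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Bifurcation points of one image and their matches in the overlapping
# image, related here by a 5-degree rotation plus a translation
rng = np.random.default_rng(0)
src = rng.uniform(0, 500, size=(20, 2))
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([12.0, -7.0])

M = estimate_affine(src, dst)
residual = np.abs(apply_affine(M, src) - dst).max()
print(residual < 1e-9)   # exact fit for noise-free correspondences: True
```

Once the transform is known, one image is warped into the other's frame and the overlap is blended, which is the mosaicking step the paper compares against AutoStitch and similar tools.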

