Deep Feature Autoextraction Method for Intrapulse Data of Radar Emitter Signal

2021, Vol 2021, pp. 1-6
Author(s):  
Shiqiang Wang ◽  
Caiyun Gao ◽  
Chang Luo ◽  
Huiyong Zeng ◽  
Guimei Zheng ◽  
...  

To address the lack of objectivity in features extracted from radar emitter signal intrapulse data, which stems from a reliance on prior knowledge, a novel method is proposed. First, the method obtains a sparse autoencoder by adding a sparsity constraint to a standard autoencoder. Second, by optimizing the sparse autoencoder and fixing the training scheme, deep intrapulse features are extracted automatically from the encoder-layer parameters. The method extracts feature vectors for six typical radar emitter signals and uses them as inputs to a support vector machine classifier. The experimental results show that the method achieves higher accuracy at large signal-to-noise ratios, and the simulations verify that the extracted features are feasible.
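
A minimal sketch of the general idea, not the authors' implementation: a sparse autoencoder (sparsity enforced through a KL-divergence penalty, an assumed choice) learns a hidden representation of fixed-length intrapulse vectors, and the encoder output then feeds an SVM classifier. The data here are random placeholders for six signal classes.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class SparseAutoencoder(nn.Module):
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        return h, self.decoder(h)

def kl_sparsity(h, rho=0.05, eps=1e-8):
    # KL divergence between target activation rho and mean hidden activation.
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

def train_sae(x, n_hidden=64, epochs=200, beta=1e-3, lr=1e-3):
    model = SparseAutoencoder(x.shape[1], n_hidden)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    xt = torch.tensor(x, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        h, recon = model(xt)
        # Reconstruction error plus sparsity penalty on the hidden layer.
        loss = nn.functional.mse_loss(recon, xt) + beta * kl_sparsity(h)
        loss.backward()
        opt.step()
    return model

# Toy data standing in for normalised intrapulse vectors of six signal classes.
rng = np.random.default_rng(0)
X = rng.random((600, 128)).astype(np.float32)
y = rng.integers(0, 6, 600)

sae = train_sae(X)
with torch.no_grad():
    feats = sae.encoder(torch.tensor(X)).numpy()   # autoextracted deep features
clf = SVC(kernel="rbf").fit(feats, y)
print("training accuracy:", clf.score(feats, y))
```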

Author(s):  
Nadia Smaoui Zghal ◽  
Marwa Zaabi ◽  
Houda Derbel

Aims: Skin cancer is a fairly critical disease all over the world, especially in Western countries and America. However, if it is detected and treated early, it is quite often curable. The main risk factors for melanoma are exposure to UV rays, the presence of many moles, and heredity. For this reason, this work focuses on the automatic diagnosis of melanoma. The aim is to extract significant features from image pixels using an unsupervised deep learning technique, the sparse autoencoder. Methodology: A preprocessing phase is required to remove artifacts and enhance the contrast of the images before proceeding with feature extraction. Once the characteristics are extracted automatically, the support vector machine and k-nearest neighbors classifiers are applied in the classification phase. The objective is to differentiate between 3 categories: melanoma, suspected case, and non-melanoma. Finally, the PH2 database is used to test the proposed approaches (200 images are presented in this dataset: 80 atypical nevi, 80 common nevi, and 40 melanoma). Results: The results in terms of specificity, accuracy, and sensitivity show strong performance for both the support vector machine classifier (94% overall accuracy) and the k-nearest neighbors classifier (92%). Conclusion: This study's experimental findings showed that the best performance was obtained by the approach based on a deep sparse autoencoder combined with a support vector machine.
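
An illustrative sketch of the classification phase only, under the assumption that sparse-autoencoder features have already been extracted from the PH2 images; the feature matrix here is a random placeholder. It compares the SVM and k-NN classifiers and reports accuracy plus per-class recall (which corresponds to sensitivity).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, classification_report

rng = np.random.default_rng(1)
X = rng.random((200, 64))       # placeholder for autoencoder features (200 PH2 images)
y = rng.integers(0, 3, 200)     # 0: non-melanoma, 1: suspected case, 2: melanoma

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred))
    # Per-class recall stands in for sensitivity; specificity can be derived
    # from the confusion matrix if needed.
    print(classification_report(y_te, pred, zero_division=0))
```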


2019, Vol 44 (3), pp. 325-338
Author(s):  
Congcong Hu ◽  
Roberto Albertani

The significant development of wind power generation worldwide brings, together with environmental benefits, wildlife concerns, especially for volant species vulnerable to interactions with wind energy facilities. For surveying such events, an automatic system for continuous monitoring of blade collisions is critical. An onboard multi-sensor system capable of providing real-time collision detection using integrated vibration sensors has been developed and successfully tested. However, detecting impacts at low signal-to-noise ratios can be challenging; hence, an advanced impact detection method has been developed and is presented in this article. A robust automated detection algorithm based on a support vector machine is proposed. After preliminary signal pre-processing, geometric features specifically selected for their sensitivity to impact signals are extracted from the raw vibration signal and its energy distribution. The predictive model is formulated by training a conventional support vector machine on the extracted features for impact identification. Finally, the performance of the predictive model is evaluated by accuracy, precision, and recall. Results indicate a linear regression relationship between signal-to-noise ratio and overall model performance. The proposed method is highly reliable at higher signal-to-noise ratios [Formula: see text] but becomes ineffective at lower signal-to-noise ratios [Formula: see text].
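
A minimal sketch of an assumed pipeline of this kind, not the authors' implementation: a few generic window-level features (stand-ins for the paper's geometric and energy-distribution features) are computed from a synthetic vibration trace containing decaying impact bursts, and an SVM is scored with accuracy, precision, and recall.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

def window_features(w):
    # Generic stand-ins for impact-sensitive geometric/energy features.
    return np.array([w.max() - w.min(), np.abs(w).mean(), w.std(),
                     np.sum(w ** 2), np.argmax(np.abs(w)) / len(w)])

rng = np.random.default_rng(2)
n_windows, win = 400, 256
X, y = [], []
for i in range(n_windows):
    trace = rng.normal(0, 1.0, win)          # background vibration noise
    impact = i % 2                            # half the windows contain an impact
    if impact:
        t0 = rng.integers(0, win - 50)
        trace[t0:t0 + 50] += 5.0 * np.exp(-np.arange(50) / 10.0)  # decaying burst
    X.append(window_features(trace))
    y.append(impact)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
```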


Information, 2019, Vol 10 (11), pp. 338
Author(s):  
Arfan Haider Wahla ◽  
Lan Chen ◽  
Yali Wang ◽  
Rong Chen

Automatic Classification of Wireless Signals (ACWS), an intermediate step between signal detection and demodulation, is investigated in this paper. ACWS plays a crucial role in several military and non-military applications by identifying interference sources and adversary attacks, enabling efficient radio spectrum management. The performance of traditional feature-based (FB) classification approaches is limited by their specific input feature sets, which results in poor generalization under unknown conditions. Therefore, in this paper, a novel feature-based classifier, the Neural-Induced Support Vector Machine (NSVM), is proposed, in which features are learned automatically from raw input signals using Convolutional Neural Networks (CNN). The output of NSVM is given by a Gaussian Support Vector Machine (SVM), which takes the features learned by the CNN as its input. NSVM is trained as a single architecture and thus learns to minimize a margin-based loss instead of a cross-entropy loss. It outperforms the traditional softmax-based CNN modulation classifier by achieving faster convergence of the accuracy and loss curves during training. Furthermore, the robustness of the NSVM classifier is verified by extensive simulation experiments in the presence of several non-ideal real-world channel impairments over a range of signal-to-noise ratio (SNR) values. NSVM performs remarkably well in classifying wireless signals: the overall averaged classification accuracy exceeds 97% at a low SNR of −2 dB and exceeds 99% at a higher SNR of 10 dB. In addition, analytical comparison with other studies shows that the performance of NSVM is superior over a range of settings.
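
A minimal sketch of the core NSVM idea under illustrative assumptions (raw I/Q frames of shape 2 × 128 and 11 modulation classes, neither taken from the paper): a small 1-D CNN is trained end to end with a multi-class hinge (margin-based) loss rather than cross-entropy, so the final linear layer behaves like an SVM-style output.

```python
import torch
import torch.nn as nn

class HingeCNN(nn.Module):
    def __init__(self, n_classes=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        # Linear output layer trained with a margin loss, SVM-like decision scores.
        self.classifier = nn.Linear(64 * 32, n_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.classifier(f)

model = HingeCNN()
criterion = nn.MultiMarginLoss()        # margin (hinge) loss, not cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch standing in for labelled I/Q frames at some SNR.
x = torch.randn(16, 2, 128)
y = torch.randint(0, 11, (16,))
for _ in range(5):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
print("final hinge loss:", loss.item())
```

The design choice mirrored here is simply the swap of the softmax/cross-entropy head for a margin-based objective; the paper's Gaussian-kernel SVM output is not reproduced in this linear sketch.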


2020, Vol 5 (2), pp. 504
Author(s):  
Matthias Omotayo Oladele ◽  
Temilola Morufat Adepoju ◽  
Olaide Abiodun Olatoke ◽  
Oluwaseun Adewale Ojo

Yorùbá is one of the three main languages spoken in Nigeria. It is a tonal language that carries accents on the vowels. There are twenty-five (25) letters in the Yorùbá alphabet, one of which is a digraph (GB). Due to the difficulty of typing handwritten Yorùbá documents, there is a need for a handwriting recognition system that can convert handwritten text to digital format. This study discusses an offline Yorùbá handwritten word recognition system (OYHWR) that recognizes Yorùbá uppercase alphabets. Handwritten characters and words were obtained from different writers using the paint application and M708 graphics tablets. The characters were used for training and the words were used for testing. Pre-processing was done on the images, and geometric features were extracted using zoning and gradient-based feature extraction. Geometric features are the different line types that form a particular character, such as vertical, horizontal, and diagonal lines. The geometric features used are the number of horizontal lines, number of vertical lines, number of right diagonal lines, number of left diagonal lines, total length of all horizontal lines, total length of all vertical lines, total length of all right-slanting lines, total length of all left-slanting lines, and the area of the skeleton. Each character is divided into 9 zones, and gradient feature extraction was used to extract the horizontal and vertical components and geometric features in each zone. The words were fed into a support vector machine classifier, and performance was evaluated based on recognition accuracy; a code sketch of this pipeline follows below. The support vector machine is a two-class classifier; hence, a multiclass variant, the least squares support vector machine (LSSVM), was used for word recognition. The one-vs-one strategy and RBF kernel were used, and the recognition accuracies obtained on the tested words were 66.7%, 83.3%, 85.7%, 87.5%, and 100%. The low recognition rate for some of the words could be a result of similarity in the extracted features.
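
An illustrative sketch under stated assumptions, not the OYHWR system: a binarised character image is split into a 3 × 3 grid of zones, per-zone pixel density and gradient components stand in for the paper's geometric features, and a one-vs-one RBF SVM (a standard SVC rather than the LSSVM used in the study) performs the multiclass classification. The images and class labels are random placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def zone_features(img, zones=3):
    # img: 2-D binary array (skeletonised character); 3 x 3 zoning as in the paper.
    h, w = img.shape
    gy, gx = np.gradient(img.astype(float))
    feats = []
    for i in range(zones):
        for j in range(zones):
            zi = slice(i * h // zones, (i + 1) * h // zones)
            zj = slice(j * w // zones, (j + 1) * w // zones)
            feats += [img[zi, zj].mean(),          # zone pixel density
                      np.abs(gx[zi, zj]).sum(),    # horizontal gradient component
                      np.abs(gy[zi, zj]).sum()]    # vertical gradient component
    return np.array(feats)

rng = np.random.default_rng(3)
# Toy dataset: 25 classes (Yorùbá uppercase letters), 10 random binary images each.
X = np.array([zone_features(rng.integers(0, 2, (48, 48))) for _ in range(250)])
y = np.repeat(np.arange(25), 10)

clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
print("training accuracy:", clf.score(X, y))
```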


Animals, 2021, Vol 11 (6), pp. 1485
Author(s):  
Kaidong Lei ◽  
Chao Zong ◽  
Xiaodong Du ◽  
Guanghui Teng ◽  
Feiqi Feng

This study proposes a method and device for the intelligent mobile monitoring of oestrus on a sow farm, applied in the field of sow production. A bionic boar model that imitates the sounds, smells, and touch of real boars was built to detect the oestrus of sows after weaning. Machine vision technology was used to identify the interactive behaviour between empty sows and bionic boars and to establish deep belief network (DBN), sparse autoencoder (SAE), and support vector machine (SVM) models, and the resulting recognition accuracy rates were 96.12%, 98.25%, and 90.00%, respectively. The interaction times and frequencies between the sow and the bionic boar and the static behaviours of both ears during heat were further analysed. The results show that there is a strong correlation between the duration of contact between the oestrus sow and the bionic boar and the static behaviours of both ears. The average contact duration between the sows in oestrus and the bionic boars was 29.7 s/3 min, and the average duration in which the ears of the oestrus sows remained static was 41.3 s/3 min. The interactions between the sow and the bionic boar were used as the basis for judging the sow’s oestrus states. In contrast with the methods of other studies, the proposed innovative design for recyclable bionic boars can be used to check emotions, and machine vision technology can be used to quickly identify oestrus behaviours. This approach can more accurately obtain the oestrus duration of a sow and provide a scientific reference for a sow’s conception time.
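
A minimal sketch of only the final classification step, with hypothetical inputs: an SVM separates oestrus from non-oestrus sows using two behavioural measurements mentioned in the study, contact duration with the bionic boar and ear-static duration per 3 minutes. The non-oestrus means and all spreads are illustrative assumptions, not values reported in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 200
# Oestrus sows: contact ~29.7 s/3 min, ear-static ~41.3 s/3 min (from the abstract).
oestrus = np.column_stack([rng.normal(29.7, 5.0, n), rng.normal(41.3, 6.0, n)])
# Non-oestrus means below are illustrative assumptions only.
non_oestrus = np.column_stack([rng.normal(8.0, 4.0, n), rng.normal(15.0, 6.0, n)])
X = np.vstack([oestrus, non_oestrus])
y = np.array([1] * n + [0] * n)        # 1 = oestrus, 0 = non-oestrus

clf = SVC(kernel="rbf")
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```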

