The Use of Time-Frequency Moments as Inputs of LSTM Network for ECG Signal Classification

Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1452
Author(s):  
Grzegorz Kłosowski ◽  
Tomasz Rymarczyk ◽  
Dariusz Wójcik ◽  
Stanisław Skowron ◽  
Tomasz Cieplak ◽  
...  

This paper describes a method of using a deep long short-term memory (LSTM) network for the problem of electrocardiogram (ECG) signal classification. ECG signals contain a great deal of subtle information that doctors analyze to determine the type of heart dysfunction. Because of the large number of signal features that are difficult to identify, raw ECG data are usually not suitable for use in machine learning. The article presents how to transform individual ECG time series into spectral images from which two characteristics, instantaneous frequency and spectral entropy, are determined. Feature extraction consists of converting the ECG signal into a series of spectral images using the short-time Fourier transform. The images were then converted, again using the Fourier transform, into two signals: instantaneous frequency and spectral entropy. The data set transformed in this way was used to train the LSTM network. During the experiments, LSTM networks were trained on both raw and spectrally transformed data and then compared with each other. The obtained results show that transforming the input signals into images can be an effective method of improving the quality of classifiers based on deep learning.
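
The two features named above can be sketched as follows; this is a minimal illustration (not the authors' code), assuming an ECG record sampled at 300 Hz and using SciPy's spectrogram as the time-frequency representation:

```python
# Minimal sketch: instantaneous frequency and spectral entropy from a
# short-time Fourier representation of an ECG record. Sampling rate and
# the random placeholder signal are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 300                      # assumed sampling rate in Hz
ecg = np.random.randn(9000)   # placeholder for one ECG record

f, t, Sxx = spectrogram(ecg, fs=fs, nperseg=128, noverlap=64)
P = Sxx / (Sxx.sum(axis=0, keepdims=True) + 1e-12)   # per-frame spectral PMF

# Instantaneous frequency: power-weighted mean frequency per time frame
inst_freq = (f[:, None] * P).sum(axis=0)

# Spectral entropy: normalized Shannon entropy of each frame's spectrum
spec_entropy = -(P * np.log2(P + 1e-12)).sum(axis=0) / np.log2(P.shape[0])

# The two sequences are stacked into a 2-channel series for the LSTM
features = np.stack([inst_freq, spec_entropy], axis=0)   # shape (2, n_frames)
```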

2021 ◽  
Vol 11 (20) ◽  
pp. 9708
Author(s):  
Xiaole Cheng ◽  
Te Han ◽  
Peilin Yang ◽  
Xugang Zhang

As an important basis for fatigue analysis and life prediction, the load spectrum is widely used in various engineering fields. The extrapolation of load samples is an important step in compiling a load spectrum, so selecting an appropriate load extrapolation method is of great significance. This paper proposes a load extrapolation method based on a long short-term memory (LSTM) network, introduces the basic principle of the extrapolation method, and applies it to a data set collected from a 5 MN metal extruder in its working state. A comparison between the extrapolated load data and the actual load shows that the trend of the extrapolated data is largely consistent with that of the original load. In addition, the method is compared with the rainflow extrapolation method based on statistical distributions. A comparison of the short-term load spectra compiled by the two extrapolation methods shows that the LSTM-based load spectrum extrapolation method better realizes load prediction and improves the compilation of the load spectrum.
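
As a rough illustration of this kind of extrapolator, the sketch below trains a small sliding-window LSTM on a synthetic load signal and rolls it forward beyond the measured record; window length, layer sizes, and the signal itself are assumptions, not the authors' settings:

```python
# Sliding-window LSTM extrapolation sketch (PyTorch), assumed settings only.
import numpy as np
import torch
import torch.nn as nn

window = 64
load = np.sin(np.linspace(0, 60, 3000)) + 0.1 * np.random.randn(3000)  # synthetic load

X = np.stack([load[i:i + window] for i in range(len(load) - window)])
y = load[window:]
X = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (N, window, 1)
y = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)   # (N, 1)

class LoadLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # last time step -> next sample

model = LoadLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                         # short demo training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Roll the trained model forward to extrapolate beyond the measured record
ctx = list(load[-window:])
for _ in range(200):
    inp = torch.tensor(ctx[-window:], dtype=torch.float32).view(1, window, 1)
    ctx.append(model(inp).item())
```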


2021 ◽  
Vol 22 ◽  
pp. 100507
Author(s):  
Siti Nurmaini ◽  
Alexander Edo Tondas ◽  
Annisa Darmawahyuni ◽  
Muhammad Naufal Rachmatullah ◽  
Jannes Effendi ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1181
Author(s):  
Chenhao Zhu ◽  
Sheng Cai ◽  
Yifan Yang ◽  
Wei Xu ◽  
Honghai Shen ◽  
...  

In applications such as carrier attitude control and mobile device navigation, a micro-electro-mechanical-system (MEMS) gyroscope will inevitably be affected by random vibration, which significantly degrades its performance. To address the degradation of MEMS gyroscope performance in random vibration environments, this paper proposes a combined method of a long short-term memory (LSTM) network and a Kalman filter (KF) for error compensation, where the Kalman filter parameters are iteratively optimized using the Kalman smoother and the expectation-maximization (EM) algorithm. To verify the effectiveness of the proposed method, we performed a linear random vibration test to acquire MEMS gyroscope data. Subsequently, an analysis of the effects of input data step size and network topology on gyroscope error compensation performance is presented. Furthermore, the autoregressive moving average-Kalman filter (ARMA-KF) model, which is commonly used in gyroscope error compensation, was also combined with the LSTM network as a comparison method. The results show that, for the x-axis data, the proposed combined method reduces the standard deviation (STD) by 51.58% and 31.92% compared to the bidirectional LSTM (BiLSTM) network and the EM-KF method, respectively. For the z-axis data, the proposed combined method reduces the standard deviation by 29.19% and 12.75% compared to the BiLSTM network and the EM-KF method, respectively. Furthermore, for the x-axis and z-axis data, the proposed combined method reduces the standard deviation by 46.54% and 22.30%, respectively, compared to the BiLSTM-ARMA-KF method, and the output is smoother, proving the effectiveness of the proposed method.
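
The Kalman-filter stage of such a pipeline can be illustrated with a scalar random-walk filter; in the sketch below the LSTM-compensated output is treated as a noisy measurement, and the process/measurement variances q and r stand in for the quantities an EM step would tune (all values are illustrative assumptions):

```python
# Scalar Kalman filter over a random-walk state model; q and r are the
# variances an EM step would estimate. Placeholder data, not gyroscope output.
import numpy as np

def scalar_kalman(z, q=1e-5, r=1e-2):
    """Filter a 1-D measurement sequence z with a random-walk state model."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                 # predict: state variance grows by q
        kgain = p / (p + r)       # update: Kalman gain
        x = x + kgain * (zk - x)
        p = (1.0 - kgain) * p
        out[k] = x
    return out

lstm_compensated = 0.01 * np.random.randn(5000)   # placeholder LSTM output
smoothed = scalar_kalman(lstm_compensated)
```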


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tuan D. Pham

Image analysis in histopathology provides insights into the microscopic examination of tissue for disease diagnosis, prognosis, and biomarker discovery. Particularly for cancer research, precise classification of histopathological images is the ultimate objective of the image analysis. Here, the time-frequency time-space long short-term memory network (TF-TS LSTM), developed for the classification of time series, is applied to classifying histopathological images. The deep learning is empowered by sequential time-frequency and time-space features extracted from the images. Furthermore, unlike conventional classification practice, a strategy for class modeling is designed to leverage the learning power of the TF-TS LSTM. Tests on several datasets of histopathological images of haematoxylin-and-eosin and immunohistochemistry stains demonstrate the strong capability of the artificial intelligence (AI)-based approach to produce highly accurate classification results. The proposed approach has the potential to be an AI tool for robust classification of histopathological images.
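
One way to picture the idea of sequential time-frequency features from an image, loosely following the abstract rather than the TF-TS LSTM pipeline itself, is to read an image row by row and give each row a small frequency-domain feature vector for an LSTM to consume; the image and feature choice below are assumptions:

```python
# Row-wise frequency features from an image patch, as a sequence for an LSTM.
# Placeholder image and feature count; not the TF-TS LSTM feature extractor.
import numpy as np

img = np.random.rand(224, 224)                 # placeholder grayscale patch

rows = img - img.mean(axis=1, keepdims=True)   # remove per-row offset
spectra = np.abs(np.fft.rfft(rows, axis=1))    # frequency view of each row
k = 16
seq = spectra[:, :k]                           # (224 rows, 16 features) sequence
# `seq` would be fed to an LSTM classifier as a length-224 feature sequence.
```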


Author(s):  
Zhang Chao ◽  
Wang Wei-zhi ◽  
Zhang Chen ◽  
Fan Bin ◽  
Wang Jian-guo ◽  
...  

Accurate and reliable fault diagnosis is one of the key and difficult issues in mechanical condition monitoring. In recent years, the Convolutional Neural Network (CNN) has been widely used in mechanical condition monitoring and has brought great breakthroughs to the field of bearing fault diagnosis. However, a CNN can only extract local features of signals, and when raw vibration signals are processed by a CNN alone, model accuracy and generalization are very low. To address these problems, this paper improves the traditional convolution layer of the CNN and builds a local feature learning block (LFLB). At the same time, a Long Short-Term Memory (LSTM) layer is introduced into the network to extract global features. This paper thus proposes a new neural network: the improved CNN-LSTM network. The extracted deep features are used for fault classification. The improved CNN-LSTM network is applied to the vibration signals of faulty bearings collected by the bearing failure laboratory of Inner Mongolia University of Science and Technology. The results show that the accuracy of the improved CNN-LSTM network on the same batch test set is 98.75%, which is about 24% higher than that of the traditional CNN. The proposed network is then applied to the Case Western Reserve University bearing data set with the network parameters unchanged. The experiment shows that the improved CNN-LSTM network has better generalization than the traditional CNN.
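
A hedged sketch of a CNN-LSTM of this shape is given below: a convolutional local feature learning block (Conv1d + batch norm + ReLU + pooling) extracts local features from the raw vibration signal, and an LSTM then models global context before classification. Layer sizes and the 10-class head are assumptions, not the authors' exact architecture:

```python
# CNN-LSTM sketch for 1-D vibration signals (PyTorch); assumed layer sizes.
import torch
import torch.nn as nn

class LFLB(nn.Module):
    """Local feature learning block: conv + batch norm + ReLU + pooling."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(c_in, c_out, kernel_size=9, padding=4),
            nn.BatchNorm1d(c_out),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
    def forward(self, x):
        return self.block(x)

class CNNLSTM(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.lflb = nn.Sequential(LFLB(1, 16), LFLB(16, 32))
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)
    def forward(self, x):               # x: (batch, 1, signal_length)
        z = self.lflb(x)                # (batch, 32, reduced_length)
        z = z.transpose(1, 2)           # (batch, reduced_length, 32)
        out, _ = self.lstm(z)
        return self.fc(out[:, -1])      # classify from the last time step

logits = CNNLSTM()(torch.randn(8, 1, 2048))   # example: 8 signals of 2048 samples
```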


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Xiaofei Zhang ◽  
Tao Wang ◽  
Qi Xiong ◽  
Yina Guo

Imagery-based brain-computer interfaces (BCIs) aim to decode different neural activities into control signals by identifying and classifying various natural commands from electroencephalogram (EEG) patterns and then control the corresponding equipment. However, several traditional BCI recognition algorithms have the “one person, one model” issue, in which the convergence of the recognition model’s training process is complicated. In this study, a new BCI model with a Dense long short-term memory (Dense-LSTM) algorithm is proposed, which combines the event-related desynchronization (ERD) and event-related synchronization (ERS) of the imagery-based BCI; model training and testing were conducted on our own data set. Furthermore, a new experimental platform was built to decode the neural activity of different subjects in a static state. Experimental evaluation of the proposed recognition algorithm yields an accuracy of 91.56%, which resolves the “one person, one model” issue along with the difficulty of convergence in the training process.
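
The ERD/ERS quantities the model combines can be illustrated with a simple band-power computation; the sketch below measures the relative mu-band (8–12 Hz) power change between a baseline window and an imagery window, with the sampling rate, band, and random EEG epoch as assumptions:

```python
# ERD/ERS sketch: relative mu-band power change between baseline and imagery
# windows of one EEG channel. Assumed sampling rate and placeholder signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250
eeg = np.random.randn(5 * fs)                      # one 5-second EEG channel

b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
mu = filtfilt(b, a, eeg)                           # mu-band component
power = mu ** 2

baseline = power[:2 * fs].mean()                   # reference interval
imagery = power[3 * fs:].mean()                    # motor-imagery interval
erd_ers = (imagery - baseline) / baseline * 100.0  # % change: ERD < 0, ERS > 0
```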


2021 ◽  
Vol 3 ◽  
Author(s):  
Yueling Ma ◽  
Carsten Montzka ◽  
Bagher Bayat ◽  
Stefan Kollet

The lack of high-quality continental-scale groundwater table depth observations necessitates developing an indirect method to produce reliable estimates of water table depth anomalies (wtda) over Europe to facilitate European groundwater management under drought conditions. Long Short-Term Memory (LSTM) networks are a deep learning technique for exploiting long- and short-term dependencies in the input-output relationship, such as those observed in the response of groundwater dynamics to atmospheric and land surface processes. Here, we introduced different input variables, including precipitation anomalies (pra), the most common proxy of wtda, for the networks to arrive at improved wtda estimates at individual pixels over Europe in various experiments. All input and target data involved in this study were obtained from the simulated TSMP-G2A data set. We performed wavelet coherence analysis to gain a comprehensive understanding of the contributions of different input variable combinations to the wtda estimates. Based on the different experiments, we derived an indirect method utilizing LSTM networks with pra and soil moisture anomaly (θa) as input, which achieved the optimal network performance. The regional medians of test R2 scores and RMSEs obtained by the method in areas with wtd ≤ 3.0 m were 76–95% and 0.17–0.30, respectively, constituting a 20–66% increase in median R2 and a 0.19–0.30 decrease in median RMSEs compared to LSTM networks with only pra as input. Our results show that introducing θa significantly improved the ability of the trained networks to predict wtda, indicating the substantial contribution of θa to explaining groundwater anomalies. Also, the European wtda map reproduced by the method agreed well with that derived from the TSMP-G2A data set with respect to drought severity, successfully detecting ~41% of strong drought events (wtda ≥ 1.5) and ~29% of extreme drought events (wtda ≥ 2) in August 2015. The study emphasizes the importance of combining soil moisture information with precipitation information in quantifying or predicting groundwater anomalies. In the future, the indirect method derived in this study can be transferred to real-time monitoring of groundwater drought at the continental scale using remotely sensed soil moisture and precipitation observations or respective information from weather prediction models.
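
A minimal sketch of how the two predictors (pra and θa) could be stacked into per-pixel LSTM input sequences and scored with R2/RMSE is shown below; window length, array shapes, and the synthetic data are assumptions, and the LSTM predictions are replaced by a placeholder:

```python
# Stacking pra and theta_a into multivariate LSTM input windows for one pixel,
# then scoring predictions with R2 and RMSE. Synthetic placeholder data only.
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

T, window = 480, 12                         # monthly record, 12-month windows
pra = np.random.randn(T)                    # precipitation anomaly
theta_a = np.random.randn(T)                # soil moisture anomaly
wtda = np.random.randn(T)                   # target water table depth anomaly

X = np.stack(
    [np.stack([pra[i:i + window], theta_a[i:i + window]], axis=-1)
     for i in range(T - window)]
)                                           # (samples, window, 2 features)
y = wtda[window:]

pred = y + 0.1 * np.random.randn(len(y))    # placeholder for LSTM predictions
print(r2_score(y, pred), np.sqrt(mean_squared_error(y, pred)))
```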


Energies ◽  
2021 ◽  
Vol 14 (18) ◽  
pp. 5762
Author(s):  
Syed Basit Ali Bukhari ◽  
Khawaja Khalid Mehmood ◽  
Abdul Wadood ◽  
Herie Park

This paper presents a new intelligent islanding detection scheme (IIDS) based on the empirical wavelet transform (EWT) and a long short-term memory (LSTM) network to identify islanding events in microgrids. The concept of EWT is extended to extract features from three-phase signals. First, the three-phase voltage signals sampled at the terminal of the targeted distributed energy resource (DER) or point of common coupling (PCC) are decomposed into empirical modes/frequency subbands using EWT. Then, the instantaneous amplitudes and instantaneous frequencies of the three phases at different frequency subbands are combined, and various statistical features are calculated. Finally, the EWT-based features, along with the three-phase voltage signals, are input to the LSTM network to differentiate between non-islanding and islanding events. To assess the efficacy of the proposed IIDS, extensive simulations are performed on an IEC microgrid and an IEEE 34-node system. The simulation results verify the effectiveness of the proposed IIDS in terms of non-detection zone (NDZ), computational time, detection accuracy, and robustness against noisy measurements. Furthermore, comparisons with existing intelligent methods and different LSTM architectures demonstrate that the proposed IIDS offers higher reliability by significantly reducing the NDZ and remains robust against measurement uncertainty.
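
The feature-extraction step can be sketched as follows; for illustration the empirical wavelet transform is replaced by plain Butterworth band-pass filtering into fixed subbands, with instantaneous amplitude and frequency taken from the analytic (Hilbert) signal and simple statistics summarizing each subband (band edges and sampling rate are assumptions):

```python
# Subband instantaneous amplitude/frequency + statistics for one voltage phase.
# Butterworth band-pass stands in for EWT here; assumed bands and sampling rate.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 3200
v = np.random.randn(fs)                       # one phase voltage, 1 s record
bands = [(30, 150), (150, 450), (450, 900)]   # assumed subbands, Hz

features = []
for lo, hi in bands:
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    sub = filtfilt(b, a, v)                   # one frequency subband
    analytic = hilbert(sub)
    amp = np.abs(analytic)                                    # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    ifreq = np.diff(phase) * fs / (2 * np.pi)                 # instantaneous frequency
    for s in (amp, ifreq):
        features += [s.mean(), s.std(), s.max(), s.min()]     # statistical features

features = np.asarray(features)   # one feature vector per window, fed to the LSTM
```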


2021 ◽  
Vol 17 (12) ◽  
pp. 155014772110612
Author(s):  
Zhengqiang Ge ◽  
Xinyu Liu ◽  
Qiang Li ◽  
Yu Li ◽  
Dong Guo

To protect the user’s privacy and prevent the disclosure of user preferences from leading to malicious entrapment, we present a combination of a recommendation algorithm and a privacy protection mechanism. In this article, we present a privacy-preserving recommendation algorithm, PrivItem2Vec, and the concept of the recommended internet of things, which consists of user information, devices, and items. The recommended internet of things uses a bidirectional long short-term memory network based on item2vec, which improves time-series modeling and recommendation accuracy. In addition, we reconstructed the data set in conjunction with the Paillier algorithm. The data on the server are encrypted and embedded, which reduces the readability of the data and ensures its security to a certain extent. Experiments show that our algorithm is superior to other works in terms of recommendation accuracy and efficiency.
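
A small illustration of the Paillier idea used for protecting server-side data is given below; it assumes the third-party python-paillier (phe) package and is not the authors' implementation:

```python
# Paillier encryption sketch using the python-paillier (phe) package:
# values are encrypted before storage, and additive homomorphism still
# allows limited aggregation on the ciphertexts. Illustrative values only.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

ratings = [4.0, 3.5, 5.0]                         # illustrative user ratings
enc = [public_key.encrypt(r) for r in ratings]    # stored encrypted on the server

enc_sum = enc[0] + enc[1] + enc[2]                # homomorphic addition of ciphertexts
print(private_key.decrypt(enc_sum))               # 12.5, recoverable only by the key holder
```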

