GTSception: A Deep Learning EEG Emotion Recognition Model Based on Fusion of Global, Time Domain and Frequency Domain Feature Extraction

Author(s):  
Jian Zhao ◽  
ZhiWei Zhang ◽  
Jinping Qiu ◽  
Lijuan Shi ◽  
Zhejun Kuang ◽  
...  

Abstract With the rapid development of deep learning in recent years, automatic electroencephalography (EEG) emotion recognition has attracted widespread attention. At present, most deep learning methods neither normalize EEG data properly nor fully extract time- and frequency-domain features, which limits the accuracy of EEG emotion recognition. To solve these problems, we propose GTSception, a deep learning EEG emotion recognition model. In pre-processing, the EEG time-slice data, including all channels, are normalized. In our model, global convolution kernels first extract overall semantic features; three kinds of temporal convolution kernels, representing different emotional periods, then extract temporal features; two kinds of spatial convolution kernels, highlighting differences between the brain hemispheres, extract spatial features; and finally emotions are classified into two categories by a fully connected layer. The experiments are based on the DEAP dataset, and our model can effectively normalize the data and fully extract features. For arousal, our accuracy is 8.76% higher than that of the current best Inception-based emotion recognition model. For valence, the best accuracy of our model reaches 91.51%.
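To make the three-stage design concrete, the following is a minimal, hypothetical PyTorch sketch of global, temporal, and spatial kernels in sequence. All filter counts, kernel lengths, and the 32-channel/512-sample input layout are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a GTSception-style pipeline; all sizes assumed.
import torch
import torch.nn as nn

class GTSceptionSketch(nn.Module):
    def __init__(self, n_chan=32, f=16):
        super().__init__()
        # Global branch: one kernel spanning all channels and a long
        # time window, for overall semantic features.
        self.global_branch = nn.Sequential(
            nn.Conv2d(1, f, (n_chan, 128)), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 8)))
        # Temporal branch: three kernel lengths standing in for the
        # three "emotional periods" (window lengths are guesses).
        self.temporal = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, f, (1, k)), nn.ReLU(),
                          nn.AdaptiveAvgPool2d((n_chan, 8)))
            for k in (32, 64, 128)])
        # Spatial kernels: whole-scalp, and hemisphere-sized with a
        # hemisphere stride to highlight left/right differences.
        self.spatial_full = nn.Conv2d(3 * f, f, (n_chan, 1))
        self.spatial_hemi = nn.Conv2d(3 * f, f, (n_chan // 2, 1),
                                      stride=(n_chan // 2, 1))
        self.fc = nn.Linear(f * 32, 2)      # two-way (high/low) label

    def forward(self, x):                   # x: (B, 1, 32, 512)
        g = self.global_branch(x)                        # (B, f, 1, 8)
        t = torch.cat([b(x) for b in self.temporal], 1)  # (B, 3f, 32, 8)
        s1 = torch.relu(self.spatial_full(t))            # (B, f, 1, 8)
        s2 = torch.relu(self.spatial_hemi(t))            # (B, f, 2, 8)
        feats = torch.cat([g.flatten(1), s1.flatten(1), s2.flatten(1)], 1)
        return self.fc(feats)
```

The half-scalp kernel with a matching stride yields one feature map per hemisphere, which is one simple way to expose left/right asymmetry to the classifier.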

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Mingyong Li ◽  
Xue Qiu ◽  
Shuang Peng ◽  
Lirong Tang ◽  
Qiqi Li ◽  
...  

With the rapid development of deep learning and wireless communication technology, emotion recognition has received increasing attention from researchers. Computers can only be truly intelligent when they possess human emotions, and emotion recognition is the first step toward this. This paper proposes a multimodal emotion recognition model based on a multiobjective optimization algorithm. The model combines speech information and facial information and can optimize the accuracy and uniformity of recognition at the same time. The speech modality is based on an improved deep convolutional neural network (DCNN); the video image modality is based on an improved depthwise separable convolutional network (DSCNN). After single-modality recognition, a multiobjective optimization algorithm fuses the two modalities at the decision level. The experimental results show that the proposed model improves every evaluation index, and its emotion recognition accuracy is 2.88% higher than that of the ISMS_ALA model. The results show that the multiobjective optimization algorithm can effectively improve the performance of the multimodal emotion recognition model.
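The abstract does not give the fusion details; the sketch below shows one plausible reading of decision-level fusion, assuming the two objectives are overall accuracy and per-class uniformity, and using a plain grid search where the paper uses a multiobjective optimization algorithm. All function names are hypothetical.

```python
# Hedged sketch: weighted decision-level fusion of two modalities, with
# the weight chosen by a grid search standing in for the paper's
# multiobjective optimization algorithm.
import numpy as np

def fuse(p_speech, p_face, w):
    """Weighted fusion of two (N, C) class-probability matrices."""
    return w * p_speech + (1.0 - w) * p_face

def per_class_accuracy(pred, y, n_classes):
    # Assumes every class appears at least once in y.
    return np.array([(pred[y == c] == c).mean() for c in range(n_classes)])

def select_weight(p_speech, p_face, y, n_classes):
    best_w, best_score = 0.5, -np.inf
    for w in np.linspace(0.0, 1.0, 101):
        pred = fuse(p_speech, p_face, w).argmax(axis=1)
        acc = (pred == y).mean()                            # objective 1: accuracy
        uni = -per_class_accuracy(pred, y, n_classes).std() # objective 2: uniformity
        score = acc + uni   # simple scalarization; the paper optimizes both jointly
        if score > best_score:
            best_w, best_score = w, score
    return best_w
```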


2021 ◽  
Author(s):  
Naveen Kumari ◽  
Rekha Bhatia

Abstract Facial emotion recognition extracts human emotions from images and videos. As such, it requires an algorithm to understand and model the relationships between faces and facial expressions, and to recognize human emotions. Recently, deep learning models have been extensively utilized to improve the facial emotion recognition rate. However, deep learning models suffer from overfitting, and they perform poorly on images with poor visibility and noise. Therefore, in this paper, a novel deep learning based facial emotion recognition tool is proposed. Initially, a joint trilateral filter is applied to the dataset to remove noise. Thereafter, contrast-limited adaptive histogram equalization (CLAHE) is applied to the filtered images to improve their visibility. Finally, a deep convolutional neural network is trained, with the Nadam optimizer used to optimize its cost function. Experiments are conducted on a benchmark dataset against competitive human emotion recognition models. Comparative analysis demonstrates that the proposed facial emotion recognition model performs considerably better than the competitive models.
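As a rough illustration of the preprocessing chain, the sketch below uses OpenCV's bilateral filter as a stand-in for the joint trilateral filter (which stock OpenCV does not provide), followed by CLAHE; parameter values are illustrative, not the paper's.

```python
# Hedged sketch of the denoise-then-CLAHE preprocessing step.
import cv2

def preprocess(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Edge-preserving smoothing to suppress noise; a plain bilateral
    # filter stands in here for the paper's joint trilateral filter.
    img = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    # Contrast-limited adaptive histogram equalization for visibility.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)

# The CNN itself would then be trained with Nadam, e.g. in PyTorch:
# optimizer = torch.optim.NAdam(model.parameters(), lr=1e-3)
```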


Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3035
Author(s):  
Feiyue Deng ◽  
Yan Bi ◽  
Yongqiang Liu ◽  
Shaopu Yang

Remaining useful life (RUL) prediction of key components is an important factor in making accurate maintenance decisions for mechanical systems. With the rapid development of deep learning (DL) techniques, research on data-driven RUL prediction has become increasingly widespread. Compared with conventional convolutional neural networks (CNNs), multi-scale CNNs can extract feature information at different scales and therefore perform better in RUL prediction. However, the existing multi-scale CNNs employ multiple convolution kernels of different sizes to construct the network framework, which has two main shortcomings: (1) convolution with multiple kernel sizes requires enormous computation and is operationally inefficient, which severely restricts its application in practical engineering; (2) a convolutional layer with a large kernel needs a large number of weight parameters, dramatically increasing the network training time and making it prone to overfitting on small datasets. To address these issues, a multi-scale dilated convolution network (MsDCN) is proposed for RUL prediction in this article. The MsDCN adopts a new multi-scale dilated convolution fusion unit (MsDCFU), in which the multi-scale framework is composed of convolution operations with different dilation factors. This effectively expands the receptive field (RF) of the convolution kernel without additional computational burden. Moreover, the MsDCFU employs depthwise separable convolution (DSC) to further improve the operational efficiency of the prognostics model. Finally, the proposed method was validated on accelerated degradation test data of rolling element bearings (REBs). The experimental results demonstrate that the proposed MsDCN achieves higher RUL prediction accuracy than typical CNNs and better operational efficiency than existing multi-scale CNNs based on different convolution kernel sizes.
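A minimal sketch of the MsDCFU idea, assuming a 1-D vibration signal, three parallel branches, and dilation factors of 1, 2, and 4: each branch is a depthwise separable convolution whose dilation, rather than its kernel size, sets the receptive field, so parameters and computation stay fixed as the RF grows.

```python
# Hedged sketch of a multi-scale dilated convolution fusion unit.
import torch
import torch.nn as nn

class MsDCFUSketch(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList()
        for d in dilations:
            pad = d * (k - 1) // 2  # keep the sequence length unchanged
            self.branches.append(nn.Sequential(
                # Depthwise conv: one small kernel per input channel;
                # the dilation factor widens the receptive field.
                nn.Conv1d(in_ch, in_ch, k, padding=pad, dilation=d,
                          groups=in_ch),
                # Pointwise conv: mixes channels cheaply (the "separable" part).
                nn.Conv1d(in_ch, out_ch, 1),
                nn.BatchNorm1d(out_ch),
                nn.ReLU(),
            ))

    def forward(self, x):           # x: (B, in_ch, T)
        # Fuse the multi-scale branches by channel concatenation.
        return torch.cat([b(x) for b in self.branches], dim=1)
```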


2020 ◽  
Vol 29 (02) ◽  
pp. 1
Author(s):  
Qihua Xu ◽  
Chunyue Zhang ◽  
Bo Sun

2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Lejun Gong ◽  
Zhifei Zhang ◽  
Shiqi Chen

Background. Clinical named entity recognition is the basic task in mining electronic medical record text. It is challenging because of the language features of Chinese electronic medical record text: many compound entities, frequently missing sentence components, and unclear entity boundaries. Moreover, corpora of Chinese electronic medical records are difficult to obtain. Methods. Given these characteristics, this study proposed a Chinese clinical entity recognition model based on deep learning pretraining. The model used word embeddings learned from a domain corpus and fine-tuned an entity recognition model pretrained on a related corpus. BiLSTM and Transformer were then used, respectively, as feature extractors to identify four types of clinical entities (diseases, symptoms, drugs, and operations) in the text of Chinese electronic medical records. Results. The model achieved 75.06% macro-precision, 76.40% macro-recall, and 75.72% macro-F1 on the test dataset. Conclusions. These experiments show that the proposed Chinese clinical entity recognition model based on deep learning pretraining can effectively improve recognition performance.
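For the BiLSTM variant of the feature extractor, the sketch below shows one plausible reading: pretrained domain embeddings feed a bidirectional LSTM whose outputs are projected to per-token tag logits. The BIO tagging scheme and all sizes are assumptions, not the paper's configuration.

```python
# Hedged sketch of a BiLSTM token tagger over pretrained embeddings.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, pretrained_emb, hidden=128, n_tags=9):
        super().__init__()
        # Embeddings pretrained on a domain corpus, fine-tuned here.
        self.emb = nn.Embedding.from_pretrained(pretrained_emb, freeze=False)
        self.lstm = nn.LSTM(pretrained_emb.size(1), hidden,
                            batch_first=True, bidirectional=True)
        # 4 entity types x {B, I} + O = 9 tags under an assumed BIO scheme.
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):        # (B, L) integer token ids
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)               # (B, L, n_tags) tag logits
```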


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Ming Li ◽  
Dezhi Han ◽  
Xinming Yin ◽  
Han Liu ◽  
Dun Li

With the rapid development and widespread application of cloud computing, open cloud networks and service-sharing scenarios have become more complex and changeable, making security challenges increasingly severe. As an effective means of network protection, network traffic anomaly detection can detect various known attacks, but it still has shortcomings. Deep learning brings a new opportunity for its further development. So far, existing deep learning models cannot fully learn the temporal and spatial features of network traffic, and their classification accuracy needs improvement. To fill this gap, this paper proposes an anomaly detection model integrating temporal and spatial features (ITSN) with a three-layer parallel network structure. ITSN learns the temporal and spatial features of the traffic and fully fuses the two through feature-fusion technology to improve the accuracy of network traffic classification. On this basis, an improved method of raw traffic feature extraction is proposed, which reduces redundant features, speeds up network convergence, and mitigates dataset imbalance. Experimental results on the ISCX-IDS 2012 and CICIDS 2017 datasets show that ITSN improves the accuracy of anomaly detection while enhancing the robustness of the detection system, and it has a higher recognition rate for positive samples.
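The abstract does not specify the three-layer parallel layout; as a rough illustration of fusing temporal and spatial traffic features, the sketch below runs a 1-D CNN branch (spatial, packet-level patterns) and an LSTM branch (temporal ordering) in parallel and concatenates their outputs before classification. All sizes are illustrative.

```python
# Hedged sketch of parallel temporal/spatial feature fusion for traffic.
import torch
import torch.nn as nn

class ParallelTSFusion(nn.Module):
    def __init__(self, n_feat, n_classes):
        super().__init__()
        self.cnn = nn.Sequential(            # spatial branch
            nn.Conv1d(n_feat, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.lstm = nn.LSTM(n_feat, 32, batch_first=True)  # temporal branch
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, x):                    # x: (B, seq_len, n_feat)
        s = self.cnn(x.transpose(1, 2)).squeeze(-1)  # (B, 32) spatial
        _, (h, _) = self.lstm(x)                     # h: (1, B, 32) temporal
        # Feature-level fusion by concatenation, then classification.
        return self.head(torch.cat([s, h[0]], dim=1))
```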


2020 ◽  
Vol 140 ◽  
pp. 358-365
Author(s):  
Zijiang Zhu ◽  
Weihuang Dai ◽  
Yi Hu ◽  
Junshan Li
