Computer science
Artificial intelligence
Convolutional neural network
Pattern recognition (psychology)
Feature extraction
Electroencephalography
Artificial neural network
Feature (linguistics)
Deep learning
Speech recognition
Psychology
Linguistics
Psychiatry
Philosophy
Authors
Mei-yu Zhong, Yang Qing-yu, Yi Liu, Bo-yu Zhen, Feng-da Zhao, Xie Bing-bing
Identifier
DOI: 10.1016/j.bspc.2022.104211
Abstract
Electroencephalogram (EEG)-based emotion recognition has attracted considerable attention in Brain-Computer Interfaces. However, because EEG signals are non-linear and non-stationary, it is difficult to analyze them and extract effective emotional information. In this paper, a novel EEG-based emotion recognition framework is proposed, comprising a Tunable Q-factor Wavelet Transform (TQWT) feature extraction method, a new spatiotemporal representation of multichannel EEG signals, and a Hybrid Convolutional Recurrent Neural Network (HCRNN). According to the oscillatory behavior of the signals, TQWT is first employed to decompose EEG into several sub-bands with stationary characteristics. Mean absolute value and differential entropy features are extracted from these sub-bands and termed TQWT-features. Next, the TQWT-features are transformed into TQWT-Feature Block Sequences (TFBSs) as the spatiotemporal representation used to train the deep model. Then, the HCRNN model is introduced, which fuses a lightweight Convolutional Neural Network (CNN) with a recurrent neural network based on Long Short-Term Memory (LSTM). The CNN learns the spatially correlated context information of the TFBSs, and the LSTM further captures the temporal dependencies in the CNN's outputs. Finally, extensive subject-dependent experiments are carried out on the SEED dataset to classify positive, neutral, and negative emotional states. The experimental results demonstrate that the TQWT-features in high-frequency sub-bands are effective for EEG-based emotion recognition. The recognition accuracy of HCRNN with TFBSs achieves superior performance (95.33 ± 1.39 %), outperforming state-of-the-art deep learning models.
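The abstract's TQWT-features combine two per-sub-band statistics: mean absolute value and differential entropy. A minimal sketch of these two computations, assuming the Gaussian-based differential entropy estimate commonly used in SEED-style EEG work (function names are illustrative, not from the paper):

```python
import math

def mean_absolute_value(x):
    """Mean absolute value (MAV) of a sub-band signal segment."""
    return sum(abs(v) for v in x) / len(x)

def differential_entropy(x):
    """Differential entropy under a Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * sigma^2), with sigma^2 the
    sample variance of the sub-band segment."""
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return 0.5 * math.log(2 * math.pi * math.e * var)

# Example: a toy alternating segment with unit variance.
segment = [1.0, -1.0, 1.0, -1.0]
mav = mean_absolute_value(segment)   # 1.0
de = differential_entropy(segment)   # 0.5 * ln(2*pi*e) for var = 1
```

In the paper's pipeline these features would be computed per channel and per TQWT sub-band, then arranged into the TFBS spatiotemporal blocks fed to the HCRNN.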