Decoding methods
Electroencephalography (EEG)
Computer science
Convolutional neural network
Power (physics)
Emotion recognition
Artificial intelligence
Psychology
Speech recognition
Pattern recognition (psychology)
Neuroscience
Telecommunications
Physics
Quantum mechanics
Authors
Weichen Huang, Wenlong Wang, Yuanqing Li, Wei Wu
Source
Journal: IEEE Transactions on Affective Computing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: pp. 1-13
Citations: 3
Identifier
DOI: 10.1109/taffc.2024.3385651
Abstract
Electroencephalography (EEG)-based emotion recognition plays a key role in the development of affective brain-computer interfaces (BCIs). However, emotions are complex and extracting salient EEG features underlying distinct emotional states is inherently limited by low signal-to-noise ratio (SNR) and low spatial resolution of practical EEG data, which is further compounded by the lack of effective spatio-temporal filter optimization approaches for generic EEG features. To address these challenges, this study proposes a set of neural networks termed the Filter-Bank Spatio-Temporal Convolutional Networks (FBSTCNets) for performing end-to-end multi-class emotion recognition via robust extraction of power and/or connectivity features from EEG. First, a filter bank is employed to construct a multiview spectral representation of EEG data. Next, a temporal convolutional layer, followed by a depth-wise spatial convolutional layer, performs spatio-temporal filtering, transforming EEG into latent signals with higher SNR. A feature extraction layer then extracts power and/or connectivity features from the latent signals. Finally, a fully connected layer with a cropped decoding strategy predicts the emotional state. Experimental results on two public emotion EEG datasets, SEED and SEED-IV, demonstrate that FBSTCNets outperform previous benchmark methods in decoding accuracy. Our approach provides a principled emotion decoding framework for designing high-performance spatio-temporal filtering networks tailored to specific EEG feature types. The FBSTCNet source code is available at https://github.com/TimeSpacerRob/FBSTCNet.
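To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of a filter-bank spatio-temporal network of this kind: a temporal convolution followed by a depth-wise spatial convolution, a log-power feature layer, and a linear classifier. All layer sizes, kernel lengths, and the class name `FilterBankSpatioTemporalNet` are illustrative assumptions, not the authors' exact FBSTCNet configuration; the filter-bank decomposition, connectivity features, and cropped decoding strategy are omitted here (refer to the linked repository for the published implementation).

```python
import torch
import torch.nn as nn


class FilterBankSpatioTemporalNet(nn.Module):
    """Minimal sketch of a filter-bank spatio-temporal convolutional network.

    Assumes the input has already been band-pass filtered into `n_bands`
    sub-bands, shaped (batch, n_bands, n_channels, n_samples).
    Hyperparameters are illustrative, not the paper's settings.
    """

    def __init__(self, n_bands=9, n_channels=62, n_classes=3,
                 temp_filters=8, depth_mult=2, temp_kernel=64):
        super().__init__()
        # Temporal convolution over each sub-band view.
        self.temporal = nn.Conv2d(n_bands, temp_filters,
                                  kernel_size=(1, temp_kernel),
                                  padding=(0, temp_kernel // 2), bias=False)
        self.bn1 = nn.BatchNorm2d(temp_filters)
        # Depth-wise spatial convolution: spatial filters learned per temporal filter.
        self.spatial = nn.Conv2d(temp_filters, temp_filters * depth_mult,
                                 kernel_size=(n_channels, 1),
                                 groups=temp_filters, bias=False)
        self.bn2 = nn.BatchNorm2d(temp_filters * depth_mult)
        # Fully connected classifier on the extracted power features.
        self.classifier = nn.Linear(temp_filters * depth_mult, n_classes)

    def forward(self, x):
        # x: (batch, n_bands, n_channels, n_samples)
        z = self.bn1(self.temporal(x))
        z = self.bn2(self.spatial(z))  # latent spatio-temporally filtered signals
        # Log-variance (power) feature per latent signal.
        power = torch.log(z.pow(2).mean(dim=(2, 3)) + 1e-6)
        return self.classifier(power)


if __name__ == "__main__":
    model = FilterBankSpatioTemporalNet()
    # 4 trials, 9 sub-bands, 62 channels, 400 time samples (hypothetical shapes)
    dummy = torch.randn(4, 9, 62, 400)
    print(model(dummy).shape)  # torch.Size([4, 3])
```

The depth-wise grouping ties each set of spatial filters to one temporal filter, which is what lets the network learn feature-specific spatio-temporal filters end to end rather than fixing them beforehand.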