Electroencephalography (EEG)
Computer science
Brain-computer interface
Pattern recognition (psychology)
Speech recognition
Artificial intelligence
Domain adaptation
Emotion recognition
Domain (mathematical analysis)
Invariant (physics)
Emotion classification
Cross-validation
Adaptation (eye)
Frequency domain
Mathematics
Classifier (UML)
Psychology
Mathematical analysis
Neuroscience
Psychiatry
Mathematical physics
Computer vision
Authors
Qingshan She, Chenqi Zhang, Feng Fang, Yuliang Ma, Yingchun Zhang
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 72: 1-12
Citations: 28
Identifier
DOI: 10.1109/tim.2023.3277985
Abstract
Emotion recognition is important in brain-computer interface (BCI) applications, and building an emotion recognition model that is robust across subjects and sessions is critical for emotion-based BCI systems. Electroencephalography (EEG) is a widely used tool for recognizing different emotional states. However, EEG has disadvantages such as small amplitude, low signal-to-noise ratio, and non-stationarity, resulting in large differences across subjects. To solve these problems, this paper proposes a new emotion recognition method based on a multi-source associate domain adaptation network that considers both domain-invariant and domain-specific features. First, separate branches were constructed for multiple source domains, under the assumption that EEG data from different domains share the same low-level features. Second, domain-specific features were extracted using one-to-one associate domain adaptation. Then, weighted scores for the individual sources were obtained according to their distribution distance from the target, and the multiple source classifiers were combined with the corresponding weighted scores. Finally, EEG emotion recognition experiments were conducted on the SEED, DEAP, and SEED-IV datasets. In the cross-subject experiments, the average accuracy was 86.16% on SEED, 65.59% on DEAP, and 59.29% on SEED-IV. In the cross-session experiments, the accuracies on SEED and SEED-IV were 91.10% and 66.68%, respectively. The proposed method achieves better classification results than state-of-the-art domain adaptation methods.
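The final fusion step the abstract describes (per-source weights derived from a distribution distance, then a weighted combination of the source classifiers' outputs) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `mmd_linear` and `combine_predictions` are hypothetical helper names, a linear-kernel MMD stands in for whatever distribution distance the paper actually uses, and a softmax over negative distances stands in for its weighting scheme.

```python
import numpy as np

def mmd_linear(x, y):
    """Linear-kernel MMD estimate: squared distance between feature means."""
    return float(np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2))

def combine_predictions(source_feats, target_feats, source_probs):
    """Weight each source classifier by how close its feature distribution
    is to the target domain, then fuse the class probabilities.

    source_feats : list of (n_i, d) arrays, one per source domain
    target_feats : (m, d) array of target-domain features
    source_probs : list of (m, c) arrays, each source classifier's
                   class probabilities for the target samples
    """
    dists = np.array([mmd_linear(f, target_feats) for f in source_feats])
    # Closer sources get larger weights (softmax over negative distance).
    w = np.exp(-dists)
    w /= w.sum()
    # Weighted sum of per-source class probabilities -> fused prediction.
    fused = sum(wi * p for wi, p in zip(w, source_probs))
    return fused.argmax(axis=1), w
```

In this sketch a source whose features lie far from the target contributes almost nothing to the fused decision, which mirrors the abstract's idea of deducing the multi-source classifier from distribution-distance-based weighted scores.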