Concepts
Modal verb
Feature (linguistics)
Discriminant
Feature vector
Emotion recognition
Domain (mathematical analysis)
Adaptation (eye)
Space (punctuation)
Computer science
Artificial intelligence
Pattern
Modality (human-computer interaction)
Session (web analytics)
Pattern recognition (psychology)
Psychology
Speech recognition
Machine learning
Mathematics
Neuroscience
Chemistry
Sociology
Philosophy
World Wide Web
Polymer chemistry
Mathematical analysis
Operating system
Linguistics
Social science
Authors
Magdiel Jiménez-Guarneros, Gibrán Fuentes-Pineda
Source
Journal: IEEE Transactions on Affective Computing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-23
Volume/Issue: 15 (3): 1502-1513
Cited by: 4
Identifier
DOI: 10.1109/taffc.2024.3357656
Abstract
Multi-modal classifiers for emotion recognition have become prominent, as the emotional states of subjects can be more comprehensively inferred from Electroencephalogram (EEG) signals and eye movements. However, existing classifiers experience a decrease in performance due to the distribution shift when applied to new users. Unsupervised domain adaptation (UDA) emerges as a solution to address the distribution shift between subjects by learning a shared latent feature space. Nevertheless, most UDA approaches focus on a single modality, while existing multi-modal approaches do not consider that fine-grained structures should also be explicitly aligned and the learned feature space must be discriminative. In this paper, we propose Coarse and Fine-grained Distribution Alignment with Correlated and Separable Features (CFDA-CSF), which performs a coarse alignment over the global feature space, and a fine-grained alignment between modalities from each domain distribution. At the same time, the model learns intra-domain correlated features, while a separable feature space is encouraged on new subjects. We conduct an extensive experimental study across the available sessions on three public datasets for multi-modal emotion recognition: SEED, SEED-IV, and SEED-V. Our proposal effectively improves the recognition performance in every session, achieving an average accuracy of 93.05%, 85.87% and 91.20% for SEED; 85.72%, 89.60%, and 86.88% for SEED-IV; and 88.49%, 91.37% and 91.57% for SEED-V.
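The abstract describes two levels of distribution alignment: a coarse alignment over the global feature space and a fine-grained alignment between modalities from each domain. The sketch below is not the authors' CFDA-CSF implementation; it is a minimal illustration, assuming a simple linear (mean-matching) MMD discrepancy and hypothetical EEG and eye-movement feature batches, of how coarse and fine-grained alignment terms could be combined into one objective.

```python
# Minimal sketch (NOT the authors' CFDA-CSF code): coarse vs. fine-grained
# distribution alignment for two modalities. All names, dimensions, the linear
# MMD discrepancy, and the loss weighting are illustrative assumptions.
import torch


def linear_mmd(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Squared distance between the feature means of two batches (linear-kernel MMD)."""
    return (x.mean(dim=0) - y.mean(dim=0)).pow(2).sum()


def alignment_losses(src_eeg, src_eye, tgt_eeg, tgt_eye):
    """Return (coarse, fine) alignment terms for source/target feature batches."""
    # Coarse alignment: concatenate modalities and align the global feature space.
    src_global = torch.cat([src_eeg, src_eye], dim=1)
    tgt_global = torch.cat([tgt_eeg, tgt_eye], dim=1)
    coarse = linear_mmd(src_global, tgt_global)

    # Fine-grained alignment: align each modality's distribution separately.
    fine = linear_mmd(src_eeg, tgt_eeg) + linear_mmd(src_eye, tgt_eye)
    return coarse, fine


if __name__ == "__main__":
    torch.manual_seed(0)
    # Hypothetical batches: 32 samples, 64-dim EEG features, 16-dim eye features.
    src_eeg, tgt_eeg = torch.randn(32, 64), torch.randn(32, 64) + 0.5
    src_eye, tgt_eye = torch.randn(32, 16), torch.randn(32, 16) - 0.3
    coarse, fine = alignment_losses(src_eeg, src_eye, tgt_eeg, tgt_eye)
    total = coarse + 0.5 * fine  # the 0.5 weight is an arbitrary illustrative choice
    print(f"coarse={coarse.item():.4f}  fine={fine.item():.4f}  total={total.item():.4f}")
```

In a full method of this kind, the mean-matching discrepancy would be replaced by the alignment criterion actually used in the paper, and the objective would also include the classification loss plus the intra-domain correlation and target-separability terms mentioned in the abstract.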