Emotion recognition
Modality
Electroencephalography
Computer science
Transformer
Speech recognition
Psychology
Voltage
Engineering
Neuroscience
Electrical engineering
Materials science
Polymer chemistry
Authors
Wei-Bang Jiang, Ziyi Li, Wei-Long Zheng, Bao-Liang Lu
Identifier
DOI: 10.1109/icassp48485.2024.10446937
Abstract
Multimodal emotion recognition based on electroencephalography (EEG) and eye movements has attracted increasing attention due to its high performance and the complementary properties of the two modalities. However, two challenges hinder its practical application: inconvenient EEG data collection and costly data annotation. In contrast, eye movements are easy to obtain and process in real-world scenarios. To combine the high performance of EEG with the easy setup of eye tracking, we propose a novel EEG-assisted Contrastive Learning Framework with a Functional Emotion Transformer (ECO-FET) for cross-modal emotion recognition. ECO-FET leverages both the functional brain connectivity and the spectral-spatial-temporal domain of EEG signals simultaneously, which substantially benefits the learning of eye movements. The whole process consists of three phases: pre-training, testing, and fine-tuning. ECO-FET exploits the complementary information provided by multiple modalities during pre-training to improve the performance of unimodal models. In the pre-training phase, unlabeled EEG and eye-movement data are fed into the model to contrastively learn emotional latent representations shared between the two modalities, while in the test phase, eye movements and a few labeled EEG samples are used to predict different emotions. Experimental results on three public datasets demonstrate that ECO-FET substantially surpasses the state of the art.
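The abstract does not specify ECO-FET's training objective, but cross-modal contrastive pre-training between paired modalities is commonly implemented with a symmetric InfoNCE loss, where each EEG/eye-movement pair in a batch is a positive and all other pairings are negatives. A minimal NumPy sketch (function name, embedding shapes, and the temperature value are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def info_nce_loss(eeg_emb, eye_emb, temperature=0.1):
    """Symmetric InfoNCE over a batch of paired embeddings.

    eeg_emb, eye_emb: (batch_size, dim) arrays; row i of each array is
    assumed to come from the same trial (a positive pair), and every
    other row in the batch serves as a negative.
    """
    # L2-normalise so the dot product is cosine similarity
    eeg = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    eye = eye_emb / np.linalg.norm(eye_emb, axis=1, keepdims=True)
    logits = eeg @ eye.T / temperature  # (batch, batch) similarity matrix
    idx = np.arange(len(logits))        # positives lie on the diagonal

    def xent(l):
        # cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[idx, idx].mean()

    # average both retrieval directions: EEG->eye and eye->EEG
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls the two modalities' representations of the same trial together, which is what lets the eye-movement encoder inherit emotional structure from EEG at test time; well-aligned pairs yield a much lower loss than mismatched ones.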