Keywords
Autoencoder
Computer science
Regularization
Artificial intelligence
Artificial neural network
Representation
Machine learning
Pattern recognition
Speech recognition
Authors
Zhaoran Wang, Shuang Qiu, Dan Li, Changde Du, Bao-Liang Lu, Huiguang He
Source
Journal: IEEE/CAA Journal of Automatica Sinica
Publisher: Institute of Electrical and Electronics Engineers
Date: 2022-09-01
Volume/Issue: 9 (9): 1612-1626
Citations: 32
Identifier
DOI: 10.1109/jas.2022.105515
Abstract
Traditional electroencephalograph (EEG)-based emotion recognition requires a large number of calibration samples to build a model for a specific subject, which restricts the practical application of affective brain-computer interfaces (BCIs). We attempt to use multi-modal data from past sessions to realize emotion recognition when only a small number of calibration samples is available. To this end, we propose a multi-modal domain adaptive variational autoencoder (MMDA-VAE), which learns shared cross-domain latent representations of the multi-modal data. Our method builds a multi-modal variational autoencoder (MVAE) to project the data of multiple modalities into a common space. Through adversarial learning and cycle-consistency regularization, it reduces the distribution difference between domains on the shared latent representation layer and realizes the transfer of knowledge. Extensive experiments on two public datasets, SEED and SEED-IV, show the superiority of the proposed method: it effectively improves emotion recognition performance with only a small amount of labelled multi-modal data.
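The abstract names four ingredients of the training objective: MVAE reconstruction, the VAE latent prior (KL) term, an adversarial term for domain alignment, and cycle-consistency regularization. The sketch below illustrates how such terms are conventionally defined and combined; the function names, the loss weights (`beta`, `lam_adv`, `lam_cyc`), and the scalar toy inputs are illustrative assumptions, not the paper's actual formulation or values.

```python
import math

def reconstruction_loss(x, x_hat):
    # Mean squared error between an input and its MVAE reconstruction.
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def kl_divergence(mu, log_var):
    # KL(N(mu, exp(log_var)) || N(0, I)): the standard VAE prior term.
    return -0.5 * sum(1 + lv - m ** 2 - math.exp(lv)
                      for m, lv in zip(mu, log_var))

def adversarial_loss(domain_prob, is_source):
    # Binary cross-entropy of a domain discriminator's output; training
    # the encoder against this term pushes source- and target-domain
    # latents toward the same distribution.
    p = min(max(domain_prob, 1e-7), 1 - 1e-7)
    return -math.log(p) if is_source else -math.log(1 - p)

def cycle_consistency_loss(x, x_cycled):
    # L1 penalty between an input and its cycle-reconstruction
    # (encode -> decode -> encode -> decode).
    return sum(abs(a - b) for a, b in zip(x, x_cycled)) / len(x)

def total_loss(x, x_hat, mu, log_var, domain_prob, is_source, x_cycled,
               beta=1.0, lam_adv=0.1, lam_cyc=0.1):
    # Weighted sum of the four terms; the weights here are assumptions.
    return (reconstruction_loss(x, x_hat)
            + beta * kl_divergence(mu, log_var)
            + lam_adv * adversarial_loss(domain_prob, is_source)
            + lam_cyc * cycle_consistency_loss(x, x_cycled))
```

In a real implementation each term would operate on encoder/decoder network outputs (e.g. in PyTorch) rather than scalars, and the adversarial term would be optimized in a min-max fashion between the discriminator and the shared encoder.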