Concepts
Computer science
Pattern
Modal verb
Modality (human-computer interaction)
Artificial intelligence
Task (project management)
Feature (linguistics)
Feature extraction
Fuse (electrical)
Speech recognition
Pattern recognition (psychology)
Machine learning
Sociology
Polymer chemistry
Management
Chemistry
Economics
Philosophy
Engineering
Electrical engineering
Linguistics
Social science
Authors
Vandana Rajan,Alessio Brutti,Andrea Cavallaro
Identifiers
DOI:10.1109/icassp43922.2022.9746924
Abstract
Humans express their emotions via facial expressions, voice intonation and word choices. To infer the nature of the underlying emotion, recognition models may use a single modality, such as vision, audio, or text, or a combination of modalities. Generally, models that fuse complementary information from multiple modalities outperform their uni-modal counterparts. However, a successful model that fuses modalities requires components that can effectively aggregate task-relevant information from each modality. As cross-modal attention is seen as an effective mechanism for multi-modal fusion, in this paper we quantify the gain that such a mechanism brings compared to the corresponding self-attention mechanism. To this end, we implement and compare a cross-attention and a self-attention model. In addition to attention, each model uses convolutional layers for local feature extraction and recurrent layers for global sequential modelling. We compare the models using different modality combinations for a 7-class emotion classification task using the IEMOCAP dataset. Experimental results indicate that although both models improve upon the state of the art in terms of weighted and unweighted accuracy for tri- and bi-modal configurations, their performance is generally statistically comparable. The code to replicate the experiments is available at https://github.com/smartcameras/SelfCrossAttn
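To illustrate the distinction the abstract draws between self-attention and cross-modal attention for fusion, the following is a minimal PyTorch sketch, not the authors' implementation (which is available at the GitHub link above). The two-modality setup, the `SelfAttnFusion`/`CrossAttnFusion` module names, the feature dimensions, and the mean-pooling step are illustrative assumptions; in the paper, the attention blocks operate on features produced by convolutional and recurrent layers.

```python
# Sketch contrasting self-attention fusion with cross-modal attention fusion
# for two modality sequences (e.g. audio frames and text tokens).
import torch
import torch.nn as nn

class SelfAttnFusion(nn.Module):
    """Concatenate both modalities along time and let self-attention mix them."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio, text):            # (B, Ta, D), (B, Tt, D)
        x = torch.cat([audio, text], dim=1)    # (B, Ta+Tt, D)
        out, _ = self.attn(x, x, x)            # queries, keys, values all from x
        return out.mean(dim=1)                 # pooled fused representation

class CrossAttnFusion(nn.Module):
    """Each modality queries the other; the attended streams are pooled and concatenated."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.a2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio, text):
        a, _ = self.a2t(audio, text, text)     # audio queries attend to text
        t, _ = self.t2a(text, audio, audio)    # text queries attend to audio
        return torch.cat([a.mean(dim=1), t.mean(dim=1)], dim=-1)

# Usage with random tensors standing in for CNN/RNN encoder outputs.
audio = torch.randn(8, 50, 128)   # batch of 8, 50 audio frames, 128-dim features
text = torch.randn(8, 20, 128)    # batch of 8, 20 token embeddings, 128-dim features
print(SelfAttnFusion()(audio, text).shape)    # torch.Size([8, 128])
print(CrossAttnFusion()(audio, text).shape)   # torch.Size([8, 256])
```

Either pooled representation would then feed a classifier over the emotion classes; the paper's comparison asks whether the explicit cross-modal querying in the second variant yields a statistically meaningful gain over the first.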