Computer science
Modality (human–computer interaction)
Artificial intelligence
Pattern
Convolutional neural network
Source code
Feature (linguistics)
Feature learning
Transformer
Pattern recognition (psychology)
Machine learning
Social science
Linguistics
Voltage
Sociology
Philosophy
Physics
Operating system
Quantum mechanics
Authors
Jiehui Huang, Jun Zhou, Zhenchao Tang, Jiaying Lin, Calvin Yu‐Chian Chen
Identifier
DOI: 10.1016/j.knosys.2023.111346
Abstract
Multimodal emotion analysis is an important endeavor in human–computer interaction research, as it enables the accurate identification of an individual's emotional state by simultaneously analyzing text, video, and audio features. Although current emotion recognition algorithms perform well with multimodal fusion strategies, two key challenges remain. The first is the efficient extraction of modality-invariant and modality-specific features prior to fusion, which requires deep feature interactions between the different modalities. The second concerns the ability to distinguish high-level semantic relations between modality features. To address these issues, we propose a new modality-binding learning framework and redesign the internal structure of the transformer model. Our modality-binding learning model addresses the first challenge by incorporating bimodal and trimodal binding mechanisms, which handle modality-specific and modality-invariant features, respectively, and facilitate cross-modality interactions. Furthermore, we enhance feature interactions by introducing fine-grained convolution modules into the feedforward and attention layers of the transformer structure. To address the second challenge, we introduce CLS and PE feature vectors for modality-invariant and modality-specific features, respectively, and use a similarity loss and a dissimilarity loss to guide model convergence. Experiments on the widely used MOSI and MOSEI datasets show that our proposed method outperforms state-of-the-art multimodal sentiment classification approaches, confirming its effectiveness and superiority. The source code is available at https://github.com/JackAILab/TMBL.
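The abstract does not specify how the similarity and dissimilarity losses are computed over the CLS (modality-invariant) and PE (modality-specific) vectors; for the exact formulation, see the paper and the linked repository. The snippet below is only a minimal, hypothetical PyTorch sketch of one common way to realize such losses: pulling the invariant representations of the three modalities together via cosine similarity and pushing the specific representations apart via a near-orthogonality penalty. The function names `similarity_loss` and `dissimilarity_loss` and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def similarity_loss(cls_text, cls_video, cls_audio):
    """Pull the modality-invariant (CLS) vectors of the three modalities together
    by maximizing pairwise cosine similarity (hypothetical formulation)."""
    pairs = [(cls_text, cls_video), (cls_text, cls_audio), (cls_video, cls_audio)]
    loss = 0.0
    for a, b in pairs:
        loss = loss + (1.0 - F.cosine_similarity(a, b, dim=-1)).mean()
    return loss / len(pairs)


def dissimilarity_loss(pe_text, pe_video, pe_audio):
    """Push the modality-specific (PE) vectors apart by penalizing squared
    cosine similarity between every pair of modalities (hypothetical formulation)."""
    pairs = [(pe_text, pe_video), (pe_text, pe_audio), (pe_video, pe_audio)]
    loss = 0.0
    for a, b in pairs:
        loss = loss + F.cosine_similarity(a, b, dim=-1).pow(2).mean()
    return loss / len(pairs)


if __name__ == "__main__":
    # Example: batch of 8 samples, 128-dim representations per modality (assumed sizes).
    cls_t, cls_v, cls_a = (torch.randn(8, 128) for _ in range(3))
    pe_t, pe_v, pe_a = (torch.randn(8, 128) for _ in range(3))
    total = similarity_loss(cls_t, cls_v, cls_a) + dissimilarity_loss(pe_t, pe_v, pe_a)
    print(total.item())
```

In practice, such auxiliary losses would be weighted and added to the main sentiment classification loss; the weighting scheme used by the paper is not stated in the abstract.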