Keywords
Computer Science, Multimodality, Artificial Intelligence, Sentiment Analysis, Transformer, Natural Language Processing
Authors
Ruohong Huan, Guowei Zhong, Peng Chen, Ronghua Liang
Identifier
DOI:10.1109/tmm.2023.3338769
Abstract
In current multimodal sentiment analysis, aligned and complete multimodal sequences are often crucial. However, obtaining complete multimodal data in the real world presents various challenges, and aligning multimodal sequences often requires significant effort. Unfortunately, most multimodal sentiment analysis methods fail when dealing with missing modalities or unaligned multimodal sequences. To tackle these two challenges simultaneously in a simple and lightweight manner, we present the Unified Multimodal Framework (UniMF). UniMF comprises two primary modules. The first, the Translation Module, translates missing modalities using information from the existing modalities. The second, the Prediction Module, uses the attention mechanism to fuse the multimodal information and generate predictions. To enhance the translation performance of the Translation Module, we introduce the Multimodal Generation Mask (MGM) and use it to construct the Multimodal Generation Transformer (MGT), which can generate the missing modality while focusing on information from the existing modalities. In the Prediction Module, we introduce the Multimodal Understanding Transformer (MUT), which includes the Multimodal Understanding Mask (MUM) and a unique sequence, MultiModalSequence (MMSeq), representing a unified multimodality. To assess the performance of UniMF, we conduct experiments on four multimodal sentiment datasets, where UniMF attains competitive or state-of-the-art results with fewer learnable parameters. The experimental results also indicate that UniMF, supported by MGT and MUT, two transformers employing special attention mechanisms, can efficiently handle both the generation task for missing modalities and the understanding task for unaligned multimodal sequences.
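The abstract describes attention masks that let a transformer generate a missing modality while attending only to the modalities that are present. The sketch below is a hypothetical reconstruction of that idea, not the paper's actual MGM implementation: for a sequence formed by concatenating per-modality segments, it builds a boolean mask in which every query position may attend only to key positions belonging to existing modalities, so keys from the missing modality are blocked. The segment lengths, modality names, and the `generation_mask` helper are all illustrative assumptions.

```python
import numpy as np

def generation_mask(seg_lens, missing):
    """Illustrative mask in the spirit of the Multimodal Generation Mask (MGM).

    seg_lens: ordered dict-like {modality_name: segment_length} for the
              concatenated multimodal sequence.
    missing:  name of the modality to be generated.
    Returns a boolean (total, total) matrix; True means "query may attend
    to this key". Keys of the missing modality are blocked for everyone,
    so its positions are filled purely from existing-modality information.
    """
    total = sum(seg_lens.values())
    # Compute the [start, end) index range of each modality's segment.
    bounds, start = {}, 0
    for name, n in seg_lens.items():
        bounds[name] = (start, start + n)
        start += n
    mask = np.zeros((total, total), dtype=bool)
    existing = [m for m in seg_lens if m != missing]
    for q in seg_lens:               # every query position...
        qs, qe = bounds[q]
        for k in existing:           # ...attends only to existing-modality keys
            ks, ke = bounds[k]
            mask[qs:qe, ks:ke] = True
    return mask

# Example: text (3 steps) and audio (2 steps) exist; vision (2 steps) is missing.
mask = generation_mask({"text": 3, "audio": 2, "vision": 2}, missing="vision")
# Rows 5-6 (vision queries) may attend to columns 0-4 (text + audio keys),
# while columns 5-6 (vision keys) are blocked for all queries.
```

In a real model this mask would be passed to the attention layer (e.g. by setting blocked positions to -inf before the softmax); the point of the sketch is only the attention pattern itself.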