Authors
Youjia Fu, Junsong Fu, Huixia Xue, Zihao Xu
Source
Journal: Electronics [MDPI AG]
Date: 2024-07-18
Volume/Issue: 13 (14): 2835
Identifier
DOI: 10.3390/electronics13142835
Abstract
Multimodal Sentiment Analysis (MSA) plays a critical role in many applications, including customer service, personal assistants, and video understanding. Currently, the majority of research on MSA focuses on developing multimodal representations, largely owing to the scarcity of unimodal annotations in MSA benchmark datasets. However, relying solely on multimodal representations to train models yields suboptimal performance, because each unimodal representation is insufficiently learned. To this end, we propose Self-HCL, which first optimizes the unimodal features extracted from a pretrained model through a Unimodal Feature Enhancement Module (UFEM), and then uses these optimized features to jointly train multimodal and unimodal tasks. Furthermore, we employ a Hybrid Contrastive Learning (HCL) strategy: unsupervised contrastive learning strengthens the representation ability of the multimodal fusion, while supervised contrastive learning improves the model's performance in the absence of unimodal annotations. Finally, building on the characteristics of unsupervised contrastive learning, we propose a new Unimodal Label Generation Module (ULGM) that can stably generate unimodal labels within a short training period. Extensive experiments on the benchmark datasets CMU-MOSI and CMU-MOSEI demonstrate that our model outperforms state-of-the-art methods.
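The hybrid strategy described above combines two standard contrastive objectives: an unsupervised loss that pulls paired embeddings from different modalities together (treating other batch items as negatives), and a supervised loss that pulls together samples sharing a sentiment label. The following NumPy sketch illustrates this general pattern only; it is not the authors' exact formulation, and the function names, temperature, and weighting factor are illustrative assumptions:

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Project embeddings onto the unit hypersphere (row-wise)."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def info_nce(anchors, positives, temperature=0.1):
    """Unsupervised contrastive (InfoNCE-style) loss.

    Each anchor's positive is its paired embedding from the other
    modality; every other pair in the batch serves as a negative.
    """
    a = l2_normalize(anchors)
    p = l2_normalize(positives)
    logits = a @ p.T / temperature                  # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # positives on the diagonal

def sup_con(features, labels, temperature=0.1):
    """Supervised contrastive loss: samples that share a label are
    treated as mutual positives; all others act as negatives."""
    f = l2_normalize(features)
    n = len(labels)
    sim = f @ f.T / temperature
    not_self = ~np.eye(n, dtype=bool)
    pos_mask = (labels[:, None] == labels[None, :]) & not_self
    exp_sim = np.exp(sim) * not_self                # drop self-similarity
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    # Average log-probability over each anchor's positives (skip anchors
    # whose label appears only once in the batch).
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()

def hybrid_loss(text_emb, audio_emb, fused_emb, labels, weight=1.0):
    """Hypothetical combination: cross-modal InfoNCE plus label-supervised
    contrastive loss on the fused representation."""
    return info_nce(text_emb, audio_emb) + weight * sup_con(fused_emb, labels)
```

In this sketch the unsupervised term only needs paired modality views, which is why it can operate without unimodal annotations, while the supervised term exploits whatever sentiment labels are available at the fused level.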