Computer science
Consistency (knowledge bases)
Social media
Sentiment analysis
Artificial intelligence
Feature (linguistics)
Modality (human–computer interaction)
Machine learning
Information retrieval
Feature extraction
Natural language processing
World Wide Web
Philosophy
Linguistics
Authors
Huan Liu, Ké Li, Jianping Fan, Caixia Yan, Tao Qin, Qinghua Zheng
Source
Journal: IEEE Transactions on Affective Computing
[Institute of Electrical and Electronics Engineers]
Date: 2023-10-01
Volume/Issue: 14(4): 3332-3344
Citations: 4
Identifier
DOI: 10.1109/taffc.2022.3220762
Abstract
Social media sentiment analysis, which aims to evaluate the attitudes of online users based on their posts, has attracted significant research attention due to its successful application in social media monitoring. Leveraging the multimodal information uploaded by users is a promising way to improve sentiment classification. However, existing multimodal fusion-based approaches still struggle with two issues: semantic inconsistency between modalities and missing modalities. To address these issues, we propose a cross-modal consistency modeling-based knowledge distillation framework for image–text sentiment classification of social media data. Specifically, we design a hybrid curriculum learning strategy that measures the semantic consistency of multimodal data and then gradually trains on all image–text pairs from easy to hard, which effectively handles the massive noise caused by inconsistencies between image and text data on social media. Moreover, to alleviate the problem of missing images in unimodal posts, we propose a privileged feature distillation method, in which the teacher model additionally treats images as privileged features and transfers this visual knowledge to the student model, thereby improving the accuracy of text-only sentiment classification. Extensive experiments on three real-world social media datasets demonstrate the effectiveness and superiority of the proposed multimodal sentiment analysis model.
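The abstract describes two mechanisms: temperature-scaled knowledge distillation from a multimodal teacher to a text-only student, and an easy-to-hard curriculum that admits image–text pairs based on a semantic-consistency score. A minimal sketch of both follows; the function names, the linear pacing schedule, and the hard 0/1 curriculum gate are illustrative assumptions, not the paper's actual implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_div(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """Privileged-feature distillation term: the teacher's logits come from
    an image+text model, the student's from text only. The T^2 factor keeps
    gradient magnitudes comparable across temperatures (standard KD scaling)."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return (temperature ** 2) * kl_div(p_teacher, p_student)

def curriculum_weight(consistency, epoch, total_epochs):
    """Easy-to-hard gating: pairs with low image-text consistency enter
    training later. A hypothetical linear pacing function lowers the
    admission threshold from 1.0 to 0.0 over the run."""
    threshold = 1.0 - epoch / total_epochs
    return 1.0 if consistency >= threshold else 0.0
```

In a full training loop, the curriculum weight would scale each pair's classification loss, while the distillation term would be added (with a trade-off coefficient) to the student's text-only objective.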