Keywords
Computer science, Sentiment analysis, Sadness, Artificial intelligence, Natural language processing, Disgust, Sentence, Graph, Psychology, Anger, Theoretical computer science, Psychiatry
Authors
Tong Zhu, Leida Li, Jufeng Yang, Sicheng Zhao, Xiao Xiao
Identifiers
DOI: 10.1109/TMM.2022.3214989
Abstract
Nowadays, people are accustomed to posting images and associated text to express their emotions on social networks. Accordingly, multimodal sentiment analysis has drawn increasing attention. Most existing image-text multimodal sentiment analysis methods simply predict the sentiment polarity. However, the same sentiment polarity may correspond to quite different emotions, such as happiness vs. excitement and disgust vs. sadness. Sentiment polarity is therefore ambiguous and may not convey the exact emotions that people want to express. Psychological research has shown that objects and words are emotional stimuli and that semantic concepts can affect the role of those stimuli. Inspired by this observation, this paper presents a new MUlti-Level SEmantic Reasoning network (MULSER) for fine-grained image-text multimodal emotion classification, which not only investigates the semantic relationships among objects and among words, respectively, but also explores the semantic relationship between regional objects and global concepts. For the image modality, we first build graphs to extract objects and a global representation, and employ a graph attention module to perform bilevel semantic reasoning. Then, a joint visual graph is built to learn the regional-global semantic relations. For the text modality, we build a word graph and further apply graph attention to reinforce the interdependencies among words in a sentence. Finally, a cross-modal attention fusion module is proposed to fuse semantic-enhanced visual and textual features, based on which informative multimodal representations are obtained for fine-grained emotion classification. Experimental results on public datasets demonstrate the superiority of the proposed model over state-of-the-art methods.
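The two operations at the heart of the abstract, graph attention over node features (objects or words) and cross-modal attention fusion of the semantic-enhanced features, can be illustrated with a minimal sketch. This is not the authors' MULSER implementation: the module names, dimensions, single-head design, toy fully connected adjacency, and the final mean-pool-and-concatenate fusion are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttention(nn.Module):
    """Single-head, GAT-style attention over node features (a sketch)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) binary adjacency.
        h = self.proj(x)                                   # (N, out_dim)
        N = h.size(0)
        # Attention logits from every concatenated (node_i, node_j) pair.
        hi = h.unsqueeze(1).expand(N, N, -1)
        hj = h.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float('-inf'))         # keep graph edges only
        alpha = torch.softmax(e, dim=-1)                   # (N, N) edge weights
        return alpha @ h                                   # semantic-enhanced nodes

class CrossModalAttentionFusion(nn.Module):
    """Fuse visual and textual node features via cross-attention (a sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, vis, txt):
        # vis: (Nv, dim) visual nodes; txt: (Nt, dim) word nodes.
        scores = self.q(vis) @ self.k(txt).t() / vis.size(-1) ** 0.5
        attn = torch.softmax(scores, dim=-1)               # (Nv, Nt)
        txt_for_vis = attn @ self.v(txt)                   # text attended per region
        # Pool each stream and concatenate into a joint representation.
        return torch.cat([vis.mean(0), txt_for_vis.mean(0)], dim=-1)

# Toy usage: 5 region nodes, 7 word nodes, fully connected graphs.
vis = GraphAttention(256, 128)(torch.randn(5, 256), torch.ones(5, 5))
txt = GraphAttention(300, 128)(torch.randn(7, 300), torch.ones(7, 7))
fused = CrossModalAttentionFusion(128)(vis, txt)           # (256,) feature vector
```

In a full model, `fused` would feed a classifier head over the fine-grained emotion categories, and the adjacency matrices would encode actual object-object, regional-global, or word-word relations rather than the fully connected toy graphs used here.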