Computer Science
Sentiment Analysis
Artificial Intelligence
Fusion
Natural Language Processing
Machine Learning
Linguistics
Philosophy
Authors
Juan Yang, Mengya Xu, Yali Xiao, Xu Du
Source
Journal: Neurocomputing
[Elsevier]
Date: 2024-01-05
Volume: 573, Article No. 127222
Citations: 2
Identifier
DOI: 10.1016/j.neucom.2023.127222
Abstract
Aspect-based sentiment analysis (ABSA), which aims to analyze users' sentiment towards a targeted aspect, has recently gained increasing attention due to its importance in supporting decision-making across various tasks. Most existing ABSA studies rely solely on the textual modality, ignoring the fact that in many cases the targeted aspect does not appear in the sentence. Multimodal ABSA (MABSA) is expected to alleviate this dilemma. However, most existing MABSA approaches still suffer from the following limitations: (1) they ignore the possibility that the associated image is irrelevant to the aspect; (2) they ignore the coarse-grained interaction between the sentence and its associated image; and (3) they fail to simultaneously leverage multiple types of useful knowledge. To address these issues, we propose an aspect-guided multi-view interactions and fusion network (AMIFN) for MABSA. Specifically, we utilize the multi-head attention mechanism to generate an aspect-guided textual representation, which serves as extended aspect semantics for guiding subsequent aspect-related interactions. When deriving the aspect-guided visual representation, we employ an image gate to dynamically filter the potential noise introduced by the associated image and produce the final image representation. Meanwhile, the coarse-grained sentence-image interaction, which carries contextual and semantic information, and the syntactic dependencies are leveraged for graph construction to obtain aspect-guided text-image interaction representations. Finally, the extracted multi-view interaction representations are integrated for sentiment classification. Extensive experiments on three multimodal benchmark datasets demonstrate the superiority and rationality of AMIFN.
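The image-gate idea in the abstract can be illustrated with a minimal sketch. Everything below (the class name ImageGate, the feature dimensions, and the sigmoid-over-concatenation gating formula) is an illustrative assumption for a pooled text/image feature setting, not the authors' released implementation.

import torch
import torch.nn as nn

class ImageGate(nn.Module):
    # Hypothetical sketch of an aspect-guided image gate: a sigmoid gate
    # computed from the aspect-guided text representation and the image
    # representation suppresses aspect-irrelevant visual features.
    def __init__(self, text_dim: int, image_dim: int):
        super().__init__()
        self.gate_proj = nn.Linear(text_dim + image_dim, image_dim)

    def forward(self, text_repr: torch.Tensor, image_repr: torch.Tensor) -> torch.Tensor:
        # text_repr: (batch, text_dim), image_repr: (batch, image_dim)
        gate = torch.sigmoid(self.gate_proj(torch.cat([text_repr, image_repr], dim=-1)))
        # Element-wise gating dynamically filters noise from the image features.
        return gate * image_repr

# Example: gate pooled BERT-style text features against pooled ResNet-style
# image features (both dimensions are assumptions).
gate = ImageGate(text_dim=768, image_dim=2048)
filtered = gate(torch.randn(4, 768), torch.randn(4, 2048))  # shape: (4, 2048)

One plausible design motivation for this gating form is that a gate conditioned on both modalities can drive aspect-irrelevant image channels toward zero while leaving useful visual evidence intact.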