Sarcasm
Computer science
Artificial intelligence
Modal verb
Natural language processing
Sentence
Sentiment analysis
Conversation
Task (project management)
Linguistics
Psychology
Communication
Philosophy
Chemistry
Management
Polymer chemistry
Economics
Authors
Yazhou Zhang,Yang Yu,Dongming Zhao,Zuhe Li,Bo Wang,Yuexian Hou,Prayag Tiwari,Jing Qin
Source
Journal: IEEE Transactions on Artificial Intelligence
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 1-13
Citations: 2
Identifier
DOI:10.1109/tai.2023.3298328
Abstract
Sarcasm is a figurative language device for expressing human inner feelings, in which the author writes a sentence that appears positive on the surface while actually conveying negative sentiment, or vice versa. Sentiment is thus closely related to sarcasm, which has led to the recent popularity of joint multi-modal sarcasm and sentiment detection in conversation (dialogue). The key challenges are multi-modal fusion and multi-task interaction. Most existing studies have focused on building multi-modal fused representations, while the commonness and uniqueness across related tasks have not received attention. To fill this gap, we propose a multi-modal multi-task interaction learning framework, termed MIL, for the joint detection of sarcasm and sentiment. Specifically, a cross-modal target attention mechanism is proposed to automatically learn the alignment between texts and images/speeches. In addition, a multi-modal interaction learning paradigm, consisting of a dual-gating network and three separate fully connected layers, simultaneously captures the commonness and uniqueness across the tasks. Comprehensive experiments on two benchmark datasets (i.e., Memotion and MUStARD) show the effectiveness of the proposed model over state-of-the-art baselines, with significant improvements of 1.9% and 2.4% in terms of F1.
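The two components named in the abstract, cross-modal attention that aligns text with image/speech features and a dual-gating network feeding task-specific heads that separate shared ("commonness") from task-specific ("uniqueness") signals, can be sketched roughly as below. This is a minimal NumPy illustration under assumed feature shapes, a single sigmoid gate, and randomly initialized weights; the actual MIL layer sizes, gating formulation, and training procedure are not specified in the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(text, image, d):
    """Scaled dot-product attention: text tokens query visual regions."""
    scores = text @ image.T / np.sqrt(d)          # (n_text, n_image)
    weights = softmax(scores, axis=-1)            # rows sum to 1
    return weights @ image                        # (n_text, d) text enriched with visual context

rng = np.random.default_rng(0)
d = 8
text = rng.normal(size=(5, d))                    # 5 text token features (assumed)
image = rng.normal(size=(3, d))                   # 3 image region features (assumed)
fused = cross_modal_attention(text, image, d)

# Dual-gating sketch: a gate mixes a shared representation with
# task-specific projections for the sarcasm and sentiment heads.
h = fused.mean(axis=0)                            # pooled multi-modal vector
W_shared, W_sar, W_sen, W_gate = (rng.normal(size=(d, d)) for _ in range(4))
g = 1.0 / (1.0 + np.exp(-(h @ W_gate)))           # sigmoid gate in (0, 1)
shared = np.tanh(h @ W_shared)                    # commonness across tasks
sarcasm_repr = g * np.tanh(h @ W_sar) + (1 - g) * shared
sentiment_repr = (1 - g) * np.tanh(h @ W_sen) + g * shared
```

Each task representation would then be passed to its own fully connected classifier; the gate lets the model trade off shared versus task-specific evidence per dimension.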