Modality (human-computer interaction)
Linear subspace
Mode
Computer science
Subspace topology
Task (project management)
Artificial intelligence
Invariant (physics)
Natural language processing
Machine learning
Mathematics
Engineering
Mathematical physics
Sociology
Systems engineering
Social science
Geometry
Authors
Devamanyu Hazarika,Roger Zimmermann,Soujanya Poria
Source
Venue: Cornell University - arXiv
Date: 2020-05-07
Citations: 61
Identifier
DOI:10.48550/arxiv.2005.03545
Abstract
Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for affective understanding of user-generated videos. The predominant approach, addressing this task, has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality to two distinct subspaces. The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap. The second subspace is modality-specific, which is private to each modality and captures their characteristic features. These representations provide a holistic view of the multimodal data, which is used for fusion that leads to task predictions. Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models. We also consider the task of Multimodal Humor Detection and experiment on the recently proposed UR_FUNNY dataset. Here too, our model fares better than strong baselines, establishing MISA as a useful multimodal framework.
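The abstract describes projecting each modality into a shared, modality-invariant subspace and a private, modality-specific subspace before fusing the resulting representations for prediction. The following is a minimal PyTorch sketch of that idea only, not the authors' implementation: the hidden size, the simple linear encoders, the concatenation-plus-MLP fusion, and all module names are assumptions for illustration, and the paper's additional training objectives are omitted.

```python
import torch
import torch.nn as nn


class MISASketch(nn.Module):
    """Toy illustration of invariant/specific subspace projections (assumed design)."""

    def __init__(self, input_dims, hidden_dim=128, num_outputs=1):
        super().__init__()
        self.modalities = list(input_dims)
        # Project each modality's features to a common size.
        self.project = nn.ModuleDict(
            {m: nn.Linear(d, hidden_dim) for m, d in input_dims.items()}
        )
        # One encoder shared across modalities -> modality-invariant subspace.
        self.shared_encoder = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU()
        )
        # One encoder per modality -> modality-specific (private) subspace.
        self.private_encoders = nn.ModuleDict(
            {m: nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
             for m in self.modalities}
        )
        # Fuse all invariant + specific vectors and predict, e.g. a sentiment score.
        self.fusion = nn.Sequential(
            nn.Linear(2 * len(self.modalities) * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_outputs),
        )

    def forward(self, features):
        # features: dict mapping modality name -> (batch, input_dim) tensor
        invariant, specific = [], []
        for m in self.modalities:
            h = self.project[m](features[m])
            invariant.append(self.shared_encoder(h))       # shared commonalities
            specific.append(self.private_encoders[m](h))   # characteristic features
        fused = torch.cat(invariant + specific, dim=-1)
        return self.fusion(fused)


if __name__ == "__main__":
    # Hypothetical feature sizes for text/audio/video inputs.
    model = MISASketch({"text": 300, "audio": 74, "video": 35})
    batch = {
        "text": torch.randn(8, 300),
        "audio": torch.randn(8, 74),
        "video": torch.randn(8, 35),
    }
    print(model(batch).shape)  # torch.Size([8, 1])
```

In this sketch the modality gap is narrowed only implicitly by sharing one encoder across modalities; the paper trains the two subspaces with dedicated objectives and a more elaborate fusion step than the plain concatenation shown here.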