Computer science
Modal verb
Artificial intelligence
Visual reasoning
Question answering
Semantics (computer science)
Representation (politics)
Visualization
Information retrieval
Relation (database)
Object (grammar)
Natural language processing
Data mining
Programming language
Politics
Chemistry
Polymer chemistry
Law
Political science
Authors
Jing Yu,Zhang Wei-feng,Yuhang Lu,Zengchang Qin,Yue Hu,Jianlong Tan,Qi Wu
Identifier
DOI: 10.1109/tmm.2020.2972830
Abstract
Cross-modal analysis has become a promising direction in artificial intelligence. Visual representation is crucial for cross-modal analysis tasks that require visual content understanding. Visual features that carry semantic information can disentangle the underlying correlations between modalities and thus benefit downstream tasks. In this paper, we propose a Visual Reasoning and Attention Network (VRANet) as a plug-and-play module that captures rich visual semantics and enhances visual representations for cross-modal analysis. VRANet is built on a bilinear visual attention module that identifies critical objects. We further propose a novel Visual Relational Reasoning (VRR) module to reason about pair-wise and inner-group visual relationships among objects, guided by the textual information. Together, the two modules enhance visual features at both the relation level and the object level. We demonstrate the effectiveness of VRANet by applying it to both Visual Question Answering (VQA) and Cross-Modal Information Retrieval (CMIR) tasks. Extensive experiments on the VQA 2.0, CLEVR, CMPlaces, and MS-COCO datasets show superior performance compared with state-of-the-art methods.
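The abstract's two ingredients — bilinear visual attention over object features and text-guided pair-wise relation reasoning — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the dimensions, the random weight matrices, and the element-wise-product parameterization of the relation score are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_v, d_q = 5, 4, 3            # 5 detected objects, toy feature dims

V = rng.normal(size=(N, d_v))    # object (region) features
q = rng.normal(size=(d_q,))      # text/question embedding

# --- bilinear visual attention: score_i = v_i^T W q ---
W = rng.normal(size=(d_v, d_q))  # hypothetical learned bilinear weight
scores = V @ W @ q
att = np.exp(scores - scores.max())
att /= att.sum()                 # softmax over the N objects
attended = att @ V               # attention-pooled visual feature

# --- text-guided pair-wise relation reasoning (sketch) ---
# relation score r_ij projects the element-wise product of an object
# pair against the text embedding (assumed parameterization, not VRR's)
W_r = rng.normal(size=(d_v, d_q))
pair = V[:, None, :] * V[None, :, :]               # (N, N, d_v) pair features
rel = pair @ W_r @ q                               # (N, N) relation scores
rel_att = np.exp(rel - rel.max(axis=1, keepdims=True))
rel_att /= rel_att.sum(axis=1, keepdims=True)      # row-wise softmax
V_enhanced = V + rel_att @ V                       # relation-enhanced objects
```

The sketch shows the two enhancement levels the abstract names: `attended` is an object-level summary picked out by the text, while `V_enhanced` augments each object with a text-weighted mixture of its pairwise neighbors.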