Cross-modal knowledge reasoning for knowledge-based visual question answering

Keywords: computer science, question answering, semantic memory, visual reasoning, interpretability, artificial intelligence, graphs, transitive relations, cognition, theoretical computer science, combinatorics, mathematics, neuroscience, biology
Authors
Jing Yu, Zihao Zhu, Yujing Wang, Weifeng Zhang, Yue Hu, Jianlong Tan
Source
Journal: Pattern Recognition [Elsevier BV]
Volume/Issue: 108: 107563 Cited by: 92
Identifier
DOI: 10.1016/j.patcog.2020.107563
Abstract

• Uses multiple knowledge graphs from the visual, semantic, and factual views to depict multimodal knowledge.
• A memory-based recurrent model for multi-step knowledge reasoning over graph-structured multimodal knowledge.
• Good interpretability, revealing how knowledge is selected from different modalities.
• Significant improvement over state-of-the-art approaches on three benchmark datasets.

Knowledge-based Visual Question Answering (KVQA) requires external knowledge beyond the visible content to answer questions about an image. This ability is challenging but indispensable for achieving general VQA. One limitation of existing KVQA solutions is that they jointly embed all kinds of information without fine-grained selection, which introduces unexpected noise into reasoning toward the correct answer. How to capture question-oriented and information-complementary evidence remains a key challenge. Inspired by human cognition theory, in this paper we depict an image by multiple knowledge graphs from the visual, semantic, and factual views, where the visual graph and semantic graph are regarded as image-conditioned instantiations of the factual graph. On top of these new representations, we re-formulate Knowledge-based Visual Question Answering as a recurrent reasoning process for obtaining complementary evidence from multimodal information. To this end, we decompose the model into a series of memory-based reasoning steps, each performed by a Graph-based Read, Update, and Control (GRUC) module that conducts parallel reasoning over both visual and semantic information. By stacking the modules multiple times, our model performs transitive reasoning and obtains question-oriented concept representations under the constraints of different modalities. Finally, we apply graph neural networks to infer the globally optimal answer by jointly considering all the concepts.
We achieve a new state-of-the-art performance on three popular benchmark datasets, including FVQA, Visual7W-KB and OK-VQA, and demonstrate the effectiveness and interpretability of our model with extensive experiments. The source code is available at: https://github.com/astro-zihao/gruc