Computer science
Optical character recognition
Question answering
Artificial intelligence
Leverage (statistics)
Robustness (evolution)
Natural language processing
Reading (process)
End-to-end principle
Information retrieval
Machine learning
Speech recognition
Image (mathematics)
Biochemistry
Chemistry
Political science
Law
Gene
Authors
Gangyan Zeng,Yuan Zhang,Yu Zhou,Xiaomeng Yang,Ning Jiang,Guoqing Zhao,Weiping Wang,Xu-Cheng Yin
Identifier
DOI:10.1016/j.patcog.2023.109337
Abstract
Text-based visual question answering (TextVQA), which answers a visual question by considering both visual content and scene text, has attracted increasing attention recently. Most existing methods employ an optical character recognition (OCR) module as a pre-processor to read text and then combine it with a visual question answering (VQA) framework. However, inaccurate OCR results can lead to cumulative error propagation, and the correlation between text reading and text-based reasoning is not fully exploited. In this work, we integrate OCR into the flow of TextVQA, targeting the mutual reinforcement of the OCR and VQA tasks. Specifically, a visually enhanced text embedding module is proposed to predict semantic features from the visual information of texts, so that texts can be reasonably understood even without accurate recognition. Further, two elaborate schemes are developed to leverage contextual information in VQA to modify OCR results. The first is a reading modification module that adaptively selects answer candidates according to the context. The second is an efficient end-to-end text reading and reasoning network, in which the downstream VQA signal contributes to the optimization of text reading. Extensive experiments show that our method outperforms existing alternatives in both accuracy and robustness, whether or not ground-truth OCR annotations are used.
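As a rough illustration only (not the authors' implementation), the two ideas in the abstract can be sketched in a few lines of NumPy: a small MLP that predicts a semantic embedding directly from the visual features of a text region, and a toy context-driven step that chooses among competing OCR candidates by similarity to a VQA context vector. All dimensions, weights, and the candidate strings below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    """Two-layer perceptron with ReLU, mapping visual features to semantic space."""
    return np.maximum(0.0, x @ w1 + b1) @ w2 + b2

# Hypothetical dimensions: 256-d visual feature per text region -> 300-d semantic vector.
d_vis, d_hid, d_sem = 256, 128, 300
w1 = rng.normal(0, 0.02, (d_vis, d_hid)); b1 = np.zeros(d_hid)
w2 = rng.normal(0, 0.02, (d_hid, d_sem)); b2 = np.zeros(d_sem)

visual_feats = rng.normal(size=(5, d_vis))     # features of 5 detected text regions
sem_feats = mlp(visual_feats, w1, b1, w2, b2)  # semantic embeddings predicted from vision,
                                               # usable even when OCR transcription is wrong

# Context-driven selection: among competing OCR candidates for one region,
# keep the one whose (toy, random here) embedding best matches a context vector.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

context = rng.normal(size=d_sem)  # stand-in for a VQA question/context representation
candidates = {"50": rng.normal(size=d_sem), "SO": rng.normal(size=d_sem)}
best = max(candidates, key=lambda k: cosine(candidates[k], context))
```

In the real model the embeddings would be learned jointly, and the end-to-end variant would backpropagate the VQA loss into the text-reading branch rather than using a fixed selection rule.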