Computer science
Question answering
Task (project management)
Artificial intelligence
Focus (optics)
Information retrieval
Coding (set theory)
Image (mathematics)
Natural language processing
Word (group theory)
Quality (philosophy)
Tree (set theory)
Machine learning
Linguistics
Mathematical analysis
Philosophy
Physics
Mathematics
Management
Set (abstract data type)
Epistemology
Optics
Economics
Programming language
Authors
Haiwei Pan,Shuning He,Kejia Zhang,Bo Qu,Chunling Chen,K. Shi
Identifiers
DOI:10.1016/j.knosys.2022.109763
Abstract
Medical Visual Question Answering (VQA) is a multimodal task that answers clinical questions about medical images. Existing methods have achieved good performance, but most medical VQA models focus on visual content while ignoring the influence of textual content. To address this issue, this paper proposes an Attention-based Multimodal Alignment Model (AMAM) for medical VQA, which aligns text-based and image-based attention to enrich the textual features. First, we develop an Image-to-Question (I2Q) attention and a Word-to-Question (W2Q) attention to model the relations of both visual and textual content to the question. Second, we design a composite loss composed of a classification loss and an Image–Question Complementary (IQC) loss. The IQC loss aligns the question-word importance learned from visual features with that learned from textual features, emphasizing meaningful words in questions and improving the quality of predicted answers. Benefiting from the attention mechanisms and the composite loss, AMAM obtains rich semantic textual information and accurate answers. Finally, because the VQA-RAD dataset contains data errors and missing labels, we further construct an enhanced dataset, VQA-RADPh, to improve data quality. Experimental results on public datasets show that AMAM outperforms advanced existing methods. Our source code is available at: https://github.com/shuning-ai/AMAM/tree/master.
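The abstract describes the composite loss only at a high level. Below is a minimal PyTorch-style sketch, assuming the IQC term aligns softmax-normalized question-word attention distributions produced by the I2Q and W2Q branches (here via KL divergence) and that a hypothetical coefficient `alpha` balances the two terms; the actual formulation is given in the paper and the linked source code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompositeLoss(nn.Module):
    """Classification loss + Image-Question Complementary (IQC) alignment loss.

    Sketch only: the divergence choice (KL) and the weight `alpha` are
    assumptions, not necessarily the paper's exact formulation.
    """

    def __init__(self, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha

    def forward(
        self,
        answer_logits: torch.Tensor,  # (batch, num_answers)
        answer_labels: torch.Tensor,  # (batch,)
        i2q_attn: torch.Tensor,       # (batch, num_words), softmax over question words
        w2q_attn: torch.Tensor,       # (batch, num_words), softmax over question words
    ) -> torch.Tensor:
        # Standard answer classification over the candidate answer set.
        cls_loss = F.cross_entropy(answer_logits, answer_labels)

        # IQC-style alignment: encourage the word importance inferred from
        # the image (I2Q) and from the question itself (W2Q) to agree.
        iqc_loss = F.kl_div(
            w2q_attn.clamp_min(1e-8).log(),  # input must be log-probabilities
            i2q_attn,                        # target probabilities
            reduction="batchmean",
        )
        return cls_loss + self.alpha * iqc_loss


if __name__ == "__main__":
    # Toy shapes: 4 questions, 12 words each, 100 candidate answers.
    loss_fn = CompositeLoss(alpha=0.5)
    logits = torch.randn(4, 100)
    labels = torch.randint(0, 100, (4,))
    i2q = torch.softmax(torch.randn(4, 12), dim=-1)
    w2q = torch.softmax(torch.randn(4, 12), dim=-1)
    print(loss_fn(logits, labels, i2q, w2q).item())
```

A symmetric alternative (e.g., mean-squared error between the two attention maps) would serve the same purpose; KL divergence is used here simply because both inputs are probability distributions over question words.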