Question Answering
Computer Science
Transformer
Artificial Intelligence
Natural Language Processing
Predictive Power
Authors
Sajidul Islam Khandaker, Tahmina Talukdar, Prima Sarker, Md Humaion Kabir Mehedi, Ehsanur Rahman Rhythm, Annajiat Alim Rasel
Identifier
DOI:10.1109/iccit60459.2023.10441514
Abstract
Visual Question Answering (VQA) is a field where computer vision and natural language processing intersect to develop systems capable of comprehending visual information and answering natural language questions. In VQA, algorithms interpret real-world images in response to questions expressed in human language. Our paper presents an extensive experimental study on VQA using a diverse set of multimodal transformers. The VQA task requires systems to comprehend both visual content and natural language questions. To address this challenge, we explore the performance of various pre-trained transformer architectures for encoding questions, including BERT, RoBERTa, and ALBERT, as well as image transformers, such as ViT, DeiT, and BEiT, for encoding images. The smooth fusion of visual and textual data in multimodal transformers promotes cross-modal understanding and strengthens reasoning skills. On benchmark datasets such as the VQA v2.0 dataset, we rigorously fine-tune and test these models to assess their effectiveness and compare their performance to more conventional VQA methods. The results show that multimodal transformers significantly outperform traditional techniques. Additionally, the models' attention maps give users insight into how they make decisions, improving interpretability and comprehension. Because of their adaptability, the tested transformer architectures have the potential to be used in a wide range of VQA applications, such as robotics, healthcare, and assistive technology. This study demonstrates the effectiveness and promise of multimodal transformers for improving visual question-answering systems.
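To make the described pipeline concrete, the following is a minimal sketch (not the authors' code) of how a pre-trained text transformer and a pre-trained image transformer can be combined for VQA-style answer classification. The checkpoint names, the concatenation-based fusion module, and the answer-vocabulary size are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch: BERT question encoder + ViT image encoder fused for VQA.
# Checkpoints, fusion design, and num_answers are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import BertModel, ViTModel

class SimpleMultimodalVQA(nn.Module):
    def __init__(self, num_answers=3129, hidden=768):
        super().__init__()
        # Pre-trained encoders; the paper also evaluates RoBERTa/ALBERT and DeiT/BEiT.
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.image_encoder = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
        # Simple fusion: concatenate pooled question and image features, then project.
        # Attention-based cross-modal fusion is a common alternative.
        self.fusion = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.GELU(),
            nn.Dropout(0.1),
        )
        self.answer_head = nn.Linear(hidden, num_answers)

    def forward(self, input_ids, attention_mask, pixel_values):
        q = self.text_encoder(input_ids=input_ids, attention_mask=attention_mask)
        v = self.image_encoder(pixel_values=pixel_values)
        q_feat = q.last_hidden_state[:, 0]   # [CLS] token summarizing the question
        v_feat = v.last_hidden_state[:, 0]   # [CLS] token summarizing the image
        fused = self.fusion(torch.cat([q_feat, v_feat], dim=-1))
        return self.answer_head(fused)        # logits over candidate answers

In a typical VQA v2.0 fine-tuning setup, the answer head is trained as a classifier over the most frequent answers, with the question tokenized by the text encoder's tokenizer and the image preprocessed by the corresponding image processor; the exact training recipe used in the paper may differ.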