Computer science
Transformer (model)
Natural language understanding
Artificial intelligence
Natural language processing
Language model
Computer vision
Authors
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang
Source
Venue: arXiv (Cornell University)
Date: 2019-08
Citations: 1067
Identifiers
DOI: 10.48550/arXiv.1908.03557
Abstract
We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks. VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an associated input image with self-attention. We further propose two visually-grounded language model objectives for pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals state-of-the-art models while being significantly simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between verbs and image regions corresponding to their arguments.