Computer science, Overfitting, Artificial intelligence, Transformer, Machine translation, Robustness (evolution), Intuition, Pattern, Natural language processing, Machine learning, Artificial neural network, Psychology, Cognitive science, Quantum mechanics, Gene, Physics, Biochemistry, Sociology, Voltage, Chemistry, Social science
Authors
Alexander Shirnin, Nikita Andreev, Sofia Potapova, Ekaterina Artemova
Identifier
DOI:10.1109/taslp.2024.3399061
Abstract
We present an approach to evaluate the robustness of pre-trained vision and language (V&L) models to noise in the input data. Given a source image/text, we perturb it using standard computer vision (CV) / natural language processing (NLP) techniques and feed it to a V&L model. To track performance changes, we focus on the task of visual question answering (VQA). Overall, we apply 5 image and 9 text perturbation techniques and probe three Transformer-based V&L models, followed by a broad analysis of their behavior and a detailed comparison. We report several key findings on how the various perturbations affect model performance; the discrepancies in performance can be attributed to differences in the models' architectures and learning objectives. Last but not least, we perform an empirical study to assess whether the attention mechanism of V&L Transformers learns to align modalities. We hypothesize that attention weights for related objects and words should, on average, be higher than for random object/word pairs. However, our study shows that, unlike what is believed for machine translation models, V&L models either do not learn alignment at all or exhibit less evidence of doing so. This may support the intuition that V&L Transformers overfit to one of the modalities.
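The abstract's alignment hypothesis can be illustrated with a minimal sketch: compare the average cross-modal attention weight over ground-truth related word/object pairs against the average over randomly sampled pairs. The function name `alignment_gap`, the matrix shape, and the toy indices below are assumptions for illustration, not the authors' actual code or data.

```python
# Minimal sketch of the alignment probe described in the abstract.
# Assumes a cross-attention matrix `attn` of shape (num_words, num_objects)
# extracted from a V&L Transformer, plus ground-truth (word, object) pairs.
# All names, shapes, and values are illustrative (hypothetical), not the paper's code.
import numpy as np

def alignment_gap(attn: np.ndarray,
                  related_pairs: list[tuple[int, int]],
                  num_random: int = 1000,
                  seed: int = 0) -> float:
    """Mean attention on related word/object pairs minus mean on random pairs."""
    rng = np.random.default_rng(seed)
    # Average attention weight over the annotated (related) pairs.
    related_mean = float(np.mean([attn[w, o] for w, o in related_pairs]))
    # Average attention weight over randomly sampled word/object pairs.
    rand_words = rng.integers(0, attn.shape[0], size=num_random)
    rand_objects = rng.integers(0, attn.shape[1], size=num_random)
    random_mean = float(attn[rand_words, rand_objects].mean())
    return related_mean - random_mean

if __name__ == "__main__":
    attn = np.random.rand(12, 36)      # toy word-to-object attention weights
    pairs = [(0, 3), (5, 17)]          # toy "related" word/object index pairs
    print(alignment_gap(attn, pairs))  # positive gap would support the hypothesis
```

Under the abstract's hypothesis, a clearly positive gap would indicate that attention aligns modalities; the paper's finding is that such evidence is weak or absent for the probed V&L models.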