Computer science
Contextual image classification
Multi-label classification
Semantic gap
Pascal (unit)
Artificial intelligence
Semantics (computer science)
Pattern recognition (psychology)
Transformer
Visualization
Feature extraction
Machine learning
Image (mathematics)
Image retrieval
Voltage
Physics
Quantum mechanics
Programming language
Authors
Xuelin Zhu, Jiuxin Cao, Jiawei Ge, Weijia Liu, Bo Liu
Identifier
DOI:10.1145/3503161.3548343
Abstract
Multi-label image classification is a fundamental yet challenging task in computer vision that aims to identify multiple objects in a given image. Recent studies on this task mainly focus on learning cross-modal interactions between label semantics and high-level visual representations via an attention operation. However, these one-shot attention-based approaches generally perform poorly in establishing accurate and robust alignments between vision and text due to the well-known semantic gap. In this paper, we propose a two-stream transformer (TSFormer) learning framework, in which the spatial stream focuses on extracting patch features with global perception, while the semantic stream aims to learn vision-aware label semantics as well as their correlations via a multi-shot attention mechanism. Specifically, in each layer of TSFormer, a cross-modal attention module is developed to aggregate visual features from the spatial stream into the semantic stream and update label semantics via a residual connection. In this way, the semantic gap between the two streams gradually narrows as the procedure progresses layer by layer, allowing the semantic stream to produce sophisticated visual representations for each label towards accurate label recognition. Extensive experiments on three visual benchmarks, including Pascal VOC 2007, Microsoft COCO, and NUS-WIDE, consistently demonstrate that our proposed TSFormer achieves state-of-the-art performance on the multi-label image classification task.
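The core update the abstract describes, label-semantic queries attending over patch features and being refreshed through a residual connection at every layer, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `cross_modal_attention`, the single-head scaled dot-product form, and all dimensions are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(label_emb, patch_feats):
    """One hypothetical TSFormer-style layer step: label embeddings (queries)
    aggregate visual patch features (keys/values), then update residually.

    label_emb:   (num_labels, dim)  -- semantic-stream label embeddings
    patch_feats: (num_patches, dim) -- spatial-stream patch features
    """
    dim = label_emb.shape[-1]
    scores = label_emb @ patch_feats.T / np.sqrt(dim)  # (num_labels, num_patches)
    attended = softmax(scores) @ patch_feats           # visual features per label
    return label_emb + attended                        # residual update of semantics

# Toy usage with assumed sizes: 20 labels (e.g. Pascal VOC classes), 7x7 patch grid
rng = np.random.default_rng(0)
labels = rng.standard_normal((20, 64))
patches = rng.standard_normal((49, 64))
updated = cross_modal_attention(labels, patches)
```

Stacking this step layer by layer, with `patch_feats` taken from the corresponding spatial-stream layer each time, is what lets the label semantics become progressively more vision-aware.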