Computer science
Artificial intelligence
Discriminative model
Pattern recognition (psychology)
Convolutional neural network
Security token
Semantics (computer science)
Object (grammar)
Dependency (UML)
Feature extraction
Feature (linguistics)
Computer vision
Natural language processing
Computer security
Programming language
Linguistics
Philosophy
Authors
Wei Gao, Fang Wan, Xingjia Pan, Zhiliang Peng, Qi Tian, Zhenjun Han, Bolei Zhou, Qixiang Ye
Identifiers
DOI: 10.1109/iccv48922.2021.00288
Abstract
Weakly supervised object localization (WSOL) is a challenging problem: given only image category labels, it requires learning object localization models. Optimizing a convolutional neural network (CNN) for classification tends to activate local discriminative regions while ignoring the complete object extent, causing the partial activation issue. In this paper, we argue that partial activation is caused by the intrinsic characteristics of CNNs, whose convolution operations produce local receptive fields and have difficulty capturing long-range feature dependencies among pixels. We introduce the token semantic coupled attention map (TS-CAM) to take full advantage of the self-attention mechanism in the visual transformer for long-range dependency extraction. TS-CAM first splits an image into a sequence of patch tokens for spatial embedding, which produces attention maps of long-range visual dependency that avoid partial activation. TS-CAM then re-allocates category-related semantics to the patch tokens, enabling each of them to be aware of object categories. TS-CAM finally couples the patch tokens with the semantic-agnostic attention map to achieve semantic-aware localization. Experiments on the ILSVRC/CUB-200-2011 datasets show that TS-CAM outperforms its CNN-CAM counterparts by 7.1%/27.1% for WSOL, achieving state-of-the-art performance. Code is available at https://github.com/vasgaowei/TS-CAM
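The abstract outlines three steps: splitting the image into patch tokens, extracting a semantic-agnostic attention map, and coupling it with category-related semantics. The following is a minimal numpy sketch of that pipeline's shape, assuming toy 2D arrays; the function names and the elementwise-product coupling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def split_into_patches(image, patch):
    """Split an H x W image into a sequence of flattened patch tokens
    (the 'spatial embedding' input of a visual transformer)."""
    h, w = image.shape
    tokens = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            tokens.append(image[i:i + patch, j:j + patch].ravel())
    return np.stack(tokens)  # shape: (num_tokens, patch * patch)

def couple(attention_map, semantic_map):
    """Illustrative coupling: combine the semantic-agnostic attention
    map with a category-related semantic map elementwise, then min-max
    normalize so the result can be thresholded for localization."""
    m = attention_map * semantic_map
    return (m - m.min()) / (m.max() - m.min() + 1e-8)

# Toy example: an 8x8 "image" cut into 4x4 patches yields 4 tokens.
img = np.arange(64, dtype=float).reshape(8, 8)
tokens = split_into_patches(img, 4)
print(tokens.shape)  # (4, 16)
```

In the real model the attention map comes from the transformer's self-attention over these tokens; the sketch only shows how the two maps are fused into a single semantic-aware localization map.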