Transformer
Computer science
Eye movement
Visual attention
Artificial intelligence
Computer vision
Psychology
Electrical engineering
Engineering
Neuroscience
Cognition
Voltage
Authors
Wuwei Wang, Ke Zhang, Yu Su, Jingyu Wang, Qi Wang
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-06-20
Volume/Issue: 35 (11): 15156-15169
Cited by: 8
Identifier
DOI: 10.1109/tnnls.2023.3282905
Abstract
In the past few years, visual tracking methods with convolutional neural networks (CNNs) have gained great popularity and success. However, the convolution operation of CNNs struggles to relate spatially distant information, which limits the discriminative power of trackers. Very recently, several Transformer-assisted tracking approaches have emerged to alleviate the above issue by combining CNNs with Transformers to enhance the feature representation. In contrast to the methods mentioned above, this article explores a pure Transformer-based model with a novel semi-Siamese architecture. Both the time–space self-attention module used to construct the feature extraction backbone and the cross-attention discriminator used to estimate the response map solely leverage attention without convolution. Inspired by the recent vision transformers (ViTs), we propose the multistage alternating time–space Transformers (ATSTs) to learn robust feature representation. Specifically, temporal and spatial tokens at each stage are alternately extracted and encoded by separate Transformers. Subsequently, a cross-attention discriminator is proposed to directly generate response maps of the search region without additional prediction heads or correlation filters. Experimental results show that our ATST-based model attains favorable results against state-of-the-art convolutional trackers. Moreover, it shows comparable performance with recent "CNN + Transformer" trackers on various benchmarks while our ATST requires significantly less training data.
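To make the two ideas in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of (1) one stage of alternating temporal/spatial self-attention over a token grid and (2) a cross-attention head in which search-region tokens query template tokens to produce response scores. This is not the authors' ATST implementation; all module names, shapes, and hyperparameters here are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only: alternating time-space self-attention and a
# cross-attention response head, assuming PyTorch. Names and shapes are
# hypothetical and not taken from the paper.
import torch
import torch.nn as nn


class AlternatingTimeSpaceStage(nn.Module):
    """One stage: self-attention along the temporal axis, then along the spatial axis."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, time, space, dim)
        b, t, s, d = tokens.shape

        # Temporal attention: each spatial location attends across frames.
        x = tokens.permute(0, 2, 1, 3).reshape(b * s, t, d)
        x = x + self.temporal_attn(self.norm1(x), self.norm1(x), self.norm1(x))[0]
        x = x.reshape(b, s, t, d).permute(0, 2, 1, 3)

        # Spatial attention: each frame attends across its own spatial tokens.
        y = x.reshape(b * t, s, d)
        y = y + self.spatial_attn(self.norm2(y), self.norm2(y), self.norm2(y))[0]
        return y.reshape(b, t, s, d)


class CrossAttentionResponse(nn.Module):
    """Search tokens query template tokens; attended features are scored into a response map."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, search_tokens: torch.Tensor, template_tokens: torch.Tensor) -> torch.Tensor:
        # search_tokens: (batch, n_search, dim); template_tokens: (batch, n_template, dim)
        attended, _ = self.cross_attn(search_tokens, template_tokens, template_tokens)
        return self.score(attended).squeeze(-1)  # (batch, n_search) response scores


if __name__ == "__main__":
    stage = AlternatingTimeSpaceStage(dim=64)
    head = CrossAttentionResponse(dim=64)
    clip = torch.randn(2, 3, 49, 64)            # 2 clips, 3 frames, 7x7 spatial tokens
    feats = stage(clip)
    response = head(feats[:, -1], feats[:, 0])  # last frame queries the first (template) frame
    print(response.shape)                       # torch.Size([2, 49])
```

The sketch only conveys the general pattern the abstract describes: temporal and spatial tokens are attended to by separate attention modules, and the response map is produced directly by cross-attention rather than by a correlation filter or extra prediction head.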