Computer science
Artificial intelligence
Computer vision
Coding
Encoding (memory)
Frame of reference
Video tracking
Frame (networking)
Video processing
Telecommunications
Biochemistry
Chemistry
Gene
Authors
Xiaotong Li,Licheng Jiao,Hao Zhu,Zhongjian Huang,Fang Liu,Lingling Li,Puhua Chen,Shuyuan Yang
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-08-21
Volume/Issue: pp. 1-15
Citations: 5
Identifier
DOI: 10.1109/TNNLS.2023.3302368
Abstract
Recently, the excellent performance of transformers has attracted the attention of the visual community. Visual transformer models usually reshape images into sequence format and encode them sequentially. However, it is difficult to explicitly represent the relative relationships in distance and direction of visual data with typical 2-D spatial structures. Moreover, the temporal motion properties of consecutive frames are hardly exploited in dynamic video tasks such as tracking. Therefore, we propose a novel dynamic polar spatio-temporal encoding for video scenes. We use spiral functions in polar space to fully exploit the spatial dependencies of distance and direction in real scenes. We then design a dynamic relative encoding mode for consecutive frames to capture the continuous spatio-temporal motion characteristics among video frames. Finally, we construct a complex-former framework with the proposed encoding applied to video-tracking tasks, where the complex fusion mode (CFM) realizes the effective fusion of scenes and positions for consecutive frames. The theoretical analysis demonstrates the feasibility and effectiveness of our proposed method. The experimental results on multiple datasets validate that our method can improve tracker performance in various video scenarios.
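The abstract describes encoding 2-D positions via spiral functions in polar space so that both distance and direction are captured. As a rough illustration only (the paper's exact spiral functions and parameters are not given here, so the function name `polar_spiral_encoding`, the Archimedean spiral form `r - b*theta`, and the sinusoidal frequencies below are all assumptions), one could sketch such an encoding like this:

```python
import numpy as np

def polar_spiral_encoding(h, w, dim=8, b=0.5):
    """Hypothetical sketch of a polar spiral positional encoding.

    Each grid position is mapped to polar coordinates (r, theta)
    about the image center; a spiral phase r - b*theta couples the
    distance and direction cues, which are then embedded with
    sinusoids at several frequencies (as in standard positional
    encodings). This is NOT the paper's actual formulation.
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(xs - cx, ys - cy)        # radial distance from center
    theta = np.arctan2(ys - cy, xs - cx)  # direction angle in [-pi, pi]
    phase = r - b * theta                 # spiral phase: distance + direction
    # geometric frequency ladder, dim/2 sin channels + dim/2 cos channels
    freqs = 1.0 / (10.0 ** (np.arange(dim // 2) / (dim // 2)))
    enc = np.concatenate(
        [np.sin(phase[..., None] * freqs),
         np.cos(phase[..., None] * freqs)],
        axis=-1,
    )
    return enc  # shape (h, w, dim), values in [-1, 1]
```

Unlike a row/column sinusoidal encoding, the polar parameterization makes the radial distance and angular direction of each position explicit, which is the property the abstract attributes to the proposed encoding.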