Computer science
Modal verb
RGB color model
Artificial intelligence
Tracking (education)
Computer vision
Downstream (manufacturing)
Eye movement
Event (particle physics)
Machine learning
Engineering
Physics
Psychology
Chemistry
Polymer chemistry
Quantum mechanics
Operations management
Pedagogy
Authors
Jiawen Zhu, Simiao Lai, Xin Chen, Dong Wang, Huchuan Lu
Identifier
DOI: 10.1109/cvpr52729.2023.00918
Abstract
Visible-modal object tracking gives rise to a series of downstream multi-modal tracking tasks. To inherit the powerful representations of the foundation model, a natural approach for multi-modal tracking is full fine-tuning of the RGB-based parameters. Although effective, this approach is not optimal, owing to the scarcity of downstream data, poor transferability, and related issues. In this paper, inspired by the recent success of prompt learning in language models, we develop Visual Prompt multi-modal Tracking (ViPT), which learns modal-relevant prompts to adapt the frozen pre-trained foundation model to various downstream multi-modal tracking tasks. ViPT finds a better way to stimulate the knowledge of the RGB-based model pre-trained at scale, while introducing only a few trainable parameters (less than 1% of the model's parameters). ViPT outperforms the full fine-tuning paradigm on multiple downstream tracking tasks, including RGB+Depth, RGB+Thermal, and RGB+Event tracking. Extensive experiments show the potential of visual prompt learning for multi-modal tracking, and ViPT achieves state-of-the-art performance while remaining parameter-efficient. Code and models are available at https://github.com/jiawen-zhu/ViPT.
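As a rough illustration of the idea described in the abstract, the following is a minimal PyTorch sketch (not the authors' implementation) of prompt-style adaptation: a pre-trained RGB backbone is frozen, and only a small prompt module fed by the auxiliary modality (depth, thermal, or event frames) is trained. The class names, the `extra_tokens` argument, and the choice of fusion point are hypothetical assumptions made for illustration.

# Minimal sketch of prompt-based multi-modal adaptation (illustrative only;
# names and interfaces are assumptions, not the ViPT code).
import torch
import torch.nn as nn

class ModalPrompter(nn.Module):
    """Tiny trainable module that turns an auxiliary-modality frame into prompt tokens."""
    def __init__(self, in_channels: int = 3, embed_dim: int = 768):
        super().__init__()
        # A lightweight projection keeps the added parameter count small
        # (the paper reports less than 1% of the foundation model's parameters).
        self.proj = nn.Sequential(
            nn.Conv2d(in_channels, embed_dim // 8, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(embed_dim // 8, embed_dim, kernel_size=1),
        )

    def forward(self, aux_frame: torch.Tensor) -> torch.Tensor:
        # (B, C_in, H, W) -> (B, N, embed_dim) token-shaped prompts.
        feat = self.proj(aux_frame)
        return feat.flatten(2).transpose(1, 2)

class PromptedTracker(nn.Module):
    def __init__(self, rgb_foundation: nn.Module, embed_dim: int = 768):
        super().__init__()
        self.backbone = rgb_foundation
        # Freeze the RGB-pretrained foundation model entirely.
        for p in self.backbone.parameters():
            p.requires_grad = False
        # The prompter is the only trainable component.
        self.prompter = ModalPrompter(embed_dim=embed_dim)

    def forward(self, rgb_frame: torch.Tensor, aux_frame: torch.Tensor):
        prompts = self.prompter(aux_frame)
        # Assumes the backbone can consume extra prompt tokens; where and how
        # the prompts are injected is a design choice of the actual method.
        return self.backbone(rgb_frame, extra_tokens=prompts)

In such a setup the optimizer is given only the prompter's parameters, so the trainable fraction stays far below the size of the frozen foundation model, which is the parameter efficiency the abstract refers to.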