Keywords: Computer Vision, Artificial Intelligence, Computer Science, Point Cloud, Fusion, Video Tracking
Authors
Zhipeng Luo,Changqing Zhou,Liang Pan,Gongjie Zhang,Tianrui Liu,Yueru Luo,Haiyu Zhao,Ziwei Liu,Shijian Lu
Identifier
DOI:10.1109/tpami.2024.3373693
Abstract
With the prevalent use of LiDAR sensors in autonomous driving, 3D point cloud object tracking has received increasing attention. In a point cloud sequence, 3D object tracking aims to predict the location and orientation of an object in consecutive frames. Motivated by the success of transformers, we propose Point Tracking TRansformer (PTTR), which efficiently predicts high-quality 3D tracking results in a coarse-to-fine manner with the help of transformer operations. PTTR consists of three novel designs. 1) Instead of random sampling, we design Relation-Aware Sampling to preserve points relevant to the given template during subsampling. 2) We propose a Point Relation Transformer for effective feature aggregation and feature matching between the template and search region. 3) Based on the coarse tracking results, we employ a novel Prediction Refinement Module to obtain the final refined prediction through local feature pooling. In addition, motivated by the favorable properties of the Bird's-Eye View (BEV) of point clouds in capturing object motion, we further design a more advanced framework named PTTR++, which incorporates both the point-wise view and the BEV representation to exploit their complementary effect in generating high-quality tracking results. PTTR++ substantially boosts the tracking performance on top of PTTR with low computational overhead. Extensive experiments over multiple datasets show that our proposed approaches achieve superior 3D tracking accuracy and efficiency. Code will be available at https://github.com/Jasonkks/PTTR
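The core idea of Relation-Aware Sampling (keeping search-region points that relate to the template rather than sampling at random) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the cosine-similarity scoring, the `relation_aware_sampling` function name, and the top-k selection are assumptions introduced here for clarity.

```python
import numpy as np

def relation_aware_sampling(search_feats, template_feats, num_samples):
    """Hedged sketch of Relation-Aware Sampling: score each search-region
    point by its feature similarity to the template points, then keep the
    most template-relevant points instead of subsampling at random.

    search_feats:   (N, C) features of the search-region points
    template_feats: (M, C) features of the template points
    Returns indices of the num_samples selected search points.
    """
    # L2-normalize features so the dot product becomes cosine similarity.
    s = search_feats / (np.linalg.norm(search_feats, axis=1, keepdims=True) + 1e-8)
    t = template_feats / (np.linalg.norm(template_feats, axis=1, keepdims=True) + 1e-8)
    sim = s @ t.T                # (N, M) pairwise similarities
    # A search point's relevance is its best match against any template point.
    relevance = sim.max(axis=1)  # (N,)
    # Keep the indices of the most relevant points.
    return np.argsort(-relevance)[:num_samples]
```

In the actual method the features would come from a learned point-cloud backbone; random feature vectors are used here only to show the selection mechanics.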