Keywords
Computer science, Point cloud, Artificial intelligence, Pixel, Computer vision, Segmentation, Inference, Point (geometry), Projection (relational algebra), Margin (machine learning), Feature (linguistics), Domain (mathematical analysis), Pattern recognition (psychology), Machine learning, Mathematics, Algorithm, Mathematical analysis, Linguistics, Philosophy, Geometry
Authors
Ziyi Wang, Yongming Rao, Xumin Yu, Jie Zhou, Jiwen Lu
Identifier
DOI: 10.1109/tpami.2024.3354961
Abstract
Nowadays, pre-training big models on large-scale datasets has achieved great success and dominated many downstream tasks in natural language processing and 2D vision, while pre-training in 3D vision is still under development. In this paper, we provide a new perspective on transferring pre-trained knowledge from the 2D domain to the 3D domain, with Point-to-Pixel Prompting in data space and Pixel-to-Point distillation in feature space, exploiting the knowledge shared between images and point clouds, which depict the same visual world. Following the principle of prompt engineering, Point-to-Pixel Prompting transforms point clouds into colorful images with geometry-preserved projection and geometry-aware coloring, so that pre-trained image models can be directly applied to point cloud tasks without structural changes or weight modifications. Using the projection correspondence in feature space, Pixel-to-Point distillation then treats the pre-trained image model as a teacher and distills its 2D knowledge into a student point cloud model, markedly improving inference efficiency and model capacity for point cloud analysis. We conduct extensive experiments on both object classification and scene segmentation under various settings to demonstrate the superiority of our method. In object classification, we reveal an important scale-up trend of Point-to-Pixel Prompting and attain 90.3% accuracy on the ScanObjectNN dataset, surpassing previous literature by a large margin. In scene-level semantic segmentation, our method outperforms traditional 3D analysis approaches and shows competitive capability on dense prediction tasks. Code is available at https://github.com/wangzy22/P2P.
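The two components admit a compact illustration. Below is a minimal PyTorch sketch of Point-to-Pixel Prompting under simplifying assumptions: a plain orthographic projection along the z-axis stands in for the paper's geometry-preserved projection, and a small MLP stands in for geometry-aware coloring. All names (`orthographic_project`, `GeometryAwareColoring`, `point_to_pixel_prompt`) are hypothetical; the repository linked above contains the authors' actual implementation.

```python
# Minimal sketch of Point-to-Pixel Prompting (hypothetical names; simplifying
# assumptions: orthographic projection along z, MLP-based per-point coloring).
import torch
import torch.nn as nn
import torch.nn.functional as F

def orthographic_project(points: torch.Tensor, img_size: int = 224):
    """Map (N, 3) points in [-1, 1]^3 to integer pixel coordinates plus depth.

    uv[:, 0] is the column (x) and uv[:, 1] is the row (y); depth is the raw
    z-coordinate. The paper's geometry-preserved projection may differ.
    """
    xy = (points[:, :2] + 1.0) * 0.5 * (img_size - 1)   # [-1, 1] -> pixel range
    uv = xy.round().long().clamp(0, img_size - 1)
    return uv, points[:, 2]

class GeometryAwareColoring(nn.Module):
    """Tiny learnable MLP predicting an RGB color for each point from its xyz."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 3), nn.Sigmoid(),          # RGB in [0, 1]
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return self.mlp(points)                          # (N, 3)

def point_to_pixel_prompt(points: torch.Tensor, coloring: GeometryAwareColoring,
                          img_size: int = 224) -> torch.Tensor:
    """Render a (3, H, W) image from an (N, 3) point cloud.

    A frozen pre-trained 2D backbone can then consume this image directly,
    with gradients flowing back only into the coloring module.
    """
    uv, depth = orthographic_project(points, img_size)
    colors = coloring(points)
    image = torch.zeros(3, img_size, img_size, device=points.device)
    # Painter's-algorithm z-buffer: draw far points first so near ones
    # overwrite them (assumes larger z means nearer to the camera).
    order = torch.argsort(depth)
    image[:, uv[order, 1], uv[order, 0]] = colors[order].t()
    return image
```

Because the projection assigns a pixel to every point, the same correspondence supports the feature-space distillation. The sketch below matches per-point student features against teacher features sampled at the projected pixels; the L2 objective and the function name are assumptions, not the paper's exact loss.

```python
# Minimal sketch of Pixel-to-Point distillation (hypothetical names; reuses
# the imports and the uv indices from the sketch above, with uv assumed to be
# already scaled to the teacher feature map's resolution).
def pixel_to_point_distill(teacher_feat: torch.Tensor,   # (C, H, W), frozen 2D teacher
                           student_feat: torch.Tensor,   # (N, C), 3D student network
                           uv: torch.Tensor) -> torch.Tensor:  # (N, 2) pixel indices
    teacher_at_points = teacher_feat[:, uv[:, 1], uv[:, 0]].t()  # gather -> (N, C)
    return F.mse_loss(student_feat, teacher_at_points)
```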