Computer science
Forgetting
Field (mathematics)
Artificial intelligence
Object detection
Machine learning
Task (project management)
Feature (linguistics)
Point cloud
Distillation
Object (grammar)
Point (geometry)
Pattern recognition (psychology)
Systems engineering
Engineering
Geometry
Chemistry
Organic chemistry
Pure mathematics
Mathematics
Linguistics
Philosophy
Authors
Zhihui Li,Pengfei Xu,Xiaojun Chang,Luyao Yang,Yuanyuan Zhang,Lina Yao,Xiaojiang Chen
Identifier
DOI: 10.1109/tpami.2023.3257546
Abstract
Object detection (OD) is a crucial computer vision task for which many algorithms and models have been developed over the years. While the performance of current OD models has improved, their growing complexity and large parameter counts make them impractical for industrial deployment. To address this problem, knowledge distillation (KD) was proposed in 2015 for image classification and subsequently extended to other visual tasks, owing to its ability to transfer the knowledge learned by complex teacher models to lightweight student models. This paper presents a comprehensive survey of KD-based OD models developed in recent years, with the aim of giving researchers an overview of recent progress in the field. We conduct an in-depth analysis of existing works, highlighting their advantages and limitations, and explore future research directions to inspire the design of models for related tasks. We summarize the basic principles of designing KD-based OD models and describe related KD-based OD tasks, including performance improvement for lightweight models, catastrophic forgetting in incremental OD, small object detection, and weakly/semi-supervised OD. We also analyze novel distillation techniques, such as different types of distillation losses and feature interaction between teacher and student models. Additionally, we provide an overview of extended applications of KD-based OD models to specific data types, such as remote sensing images and 3D point clouds. We compare and analyze the performance of different models on several common datasets and discuss promising directions for solving specific OD problems.
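For readers unfamiliar with the 2015 distillation technique the abstract refers to, the following is a minimal PyTorch sketch of the classic soft-target KD loss (Hinton et al., 2015), plus a simple feature-imitation term of the kind surveyed under "feature interaction between teacher and student models". The function names, temperature T, weighting alpha, and channel adapter below are illustrative assumptions, not details taken from this survey.

```python
# Minimal sketch of two common KD ingredients, assuming a classification
# head with logits and intermediate feature maps from teacher/student.
# All names and hyperparameters here are hypothetical examples.
import torch
import torch.nn as nn
import torch.nn.functional as F


def kd_logit_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic soft-target distillation: blend KL to the softened
    teacher distribution with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 rescales gradients to match the hard-loss magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


def feature_imitation_loss(student_feat, teacher_feat, adapter):
    """L2 imitation between adapted student features and teacher
    features; `adapter` (e.g. a 1x1 conv) aligns channel widths."""
    return F.mse_loss(adapter(student_feat), teacher_feat)


# Example usage with random tensors standing in for real model outputs.
if __name__ == "__main__":
    s_logits = torch.randn(8, 20)            # student logits: 8 samples, 20 classes
    t_logits = torch.randn(8, 20)            # teacher logits
    labels = torch.randint(0, 20, (8,))      # ground-truth class indices
    print(kd_logit_loss(s_logits, t_logits, labels).item())

    s_feat = torch.randn(8, 64, 32, 32)      # student feature map
    t_feat = torch.randn(8, 256, 32, 32)     # wider teacher feature map
    adapter = nn.Conv2d(64, 256, kernel_size=1)
    print(feature_imitation_loss(s_feat, t_feat, adapter).item())
```

The temperature T softens both distributions so the student can learn from the teacher's relative class similarities rather than only its top prediction; detection-specific methods surveyed in the paper extend these ideas with losses over region proposals, localization outputs, and selected feature regions.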