Discretization
Computer science
Object (grammar)
Distillation
Object-oriented programming
Algebra over a field
Artificial intelligence
Theoretical computer science
Mathematics
Programming language
Pure mathematics
Mathematical analysis
Chromatography
Chemistry
Authors
Chen Cheng, Huiyan Ding, Minglei Duan
Identifier
DOI:10.1016/j.dsp.2024.104512
Abstract
In recent years, lightweight object detection networks have been increasingly applied to remote sensing platforms because of their fast inference and flexible deployment. Knowledge distillation has been widely used to reduce the performance gap between large and small models, and many studies have combined it with object detection. However, existing knowledge distillation methods often overlook the transfer of localization knowledge. This paper therefore proposes Discretized Position Knowledge Distillation (DPKD) to improve the use of knowledge distillation in object detection. Specifically, DPKD incorporates a Discretization Algorithm Module (DAM), which leverages both a general probability distribution and a cross-Gaussian distribution to transfer high-quality bounding box position and pose information. In addition, the Position Knowledge Distillation (PKD) loss splits target and non-target bounding boxes into separate terms, addressing the lack of background knowledge transfer during distillation. To further enhance the learning of high-quality bounding boxes, a Region Weighting Module (RWM) based on EIoU is introduced in DPKD, assigning weights to the bounding boxes in the teacher's output. The effectiveness of DPKD for remote sensing object detection in multi-modal scenarios was verified through multi-modal training on the publicly available DOTA and HRSID datasets.
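The abstract describes DPKD only at a high level, so the following is a minimal, hypothetical PyTorch sketch of how its three ingredients could fit together: a KL-divergence distillation term over discretized box-edge distributions, a split into target (foreground) and non-target (background) locations, and EIoU-based weights on the teacher's boxes. All function names, tensor layouts, and hyper-parameters here (eiou, position_distillation_loss, tau, bg_weight, the n_bins edge representation) are illustrative assumptions rather than the paper's actual formulation; only the EIoU computation follows the published EIoU definition, and the cross-Gaussian component of the DAM is not modeled.

```python
# Hypothetical sketch of an EIoU-weighted, target/non-target split
# distillation loss over discretized box-edge distributions (PyTorch).
import torch
import torch.nn.functional as F


def eiou(pred, target, eps=1e-7):
    """EIoU between boxes in (x1, y1, x2, y2) format; both inputs [N, 4].

    EIoU = IoU - center-distance penalty - width penalty - height penalty.
    """
    # Intersection area
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]

    # Union area and IoU
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box (width/height squared)
    enc_wh = (torch.max(pred[:, 2:], target[:, 2:])
              - torch.min(pred[:, :2], target[:, :2])).clamp(min=0)
    cw2 = enc_wh[:, 0] ** 2 + eps
    ch2 = enc_wh[:, 1] ** 2 + eps

    # Center-distance and width/height penalties
    rho2 = (((pred[:, :2] + pred[:, 2:]) / 2
             - (target[:, :2] + target[:, 2:]) / 2) ** 2).sum(dim=1)
    dw = (pred[:, 2] - pred[:, 0]) - (target[:, 2] - target[:, 0])
    dh = (pred[:, 3] - pred[:, 1]) - (target[:, 3] - target[:, 1])

    return iou - rho2 / (cw2 + ch2) - dw ** 2 / cw2 - dh ** 2 / ch2


def position_distillation_loss(
    student_logits,   # [N, 4, n_bins] per-edge distribution logits (student)
    teacher_logits,   # [N, 4, n_bins] per-edge distribution logits (teacher)
    teacher_boxes,    # [N, 4] decoded teacher boxes, (x1, y1, x2, y2)
    gt_boxes,         # [N, 4] ground-truth box assigned to each location
    fg_mask,          # [N] bool, True for target (foreground) locations
    tau=2.0,          # distillation temperature (assumed value)
    bg_weight=0.25,   # down-weight for the non-target term (assumed value)
):
    """KL distillation on discretized box-edge distributions, split into
    target / non-target terms and re-weighted by teacher-vs-GT EIoU."""
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    p_t = F.softmax(teacher_logits / tau, dim=-1)

    # Per-location KL: sum over bins, average over the 4 box edges.
    kl = F.kl_div(log_p_s, p_t, reduction="none").sum(-1).mean(-1) * tau ** 2

    # Region weights: higher-quality teacher boxes (larger EIoU against the
    # ground truth) contribute more to the foreground term.
    w_fg = eiou(teacher_boxes, gt_boxes).clamp(min=0)

    fg = (w_fg * kl)[fg_mask].sum() / w_fg[fg_mask].sum().clamp(min=1e-7)
    bg = kl[~fg_mask].mean() if (~fg_mask).any() else kl.new_tensor(0.0)
    return fg + bg_weight * bg


# Toy usage with random tensors (shapes only; not meaningful data).
N, n_bins = 8, 16
xy = torch.rand(N, 2) * 100
t_boxes = torch.cat([xy, xy + 10], dim=1)      # valid x1y1x2y2 boxes
g_boxes = torch.cat([xy + 2, xy + 12], dim=1)
loss = position_distillation_loss(
    torch.randn(N, 4, n_bins), torch.randn(N, 4, n_bins),
    t_boxes, g_boxes, torch.rand(N) > 0.5,
)
```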