Computer science
Artificial intelligence
Feature (linguistics)
Norm (philosophy)
Pattern recognition (psychology)
Similarity (geometry)
Machine learning
Object (grammar)
Contrast (vision)
Pixel
Overhead (engineering)
Image (mathematics)
Philosophy
Linguistics
Political science
Law
Operating system
Authors
Philip de Rijk, Lukas Schneider, Marius Cordts, Dariu M. Gavrila
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 8
Identifier
DOI: 10.48550/arxiv.2211.13133
Abstract
Knowledge Distillation (KD) is a well-known training paradigm for deep neural networks in which knowledge acquired by a large teacher model is transferred to a small student. KD has proven to be an effective technique to significantly improve the student's performance for various tasks, including object detection. KD techniques mostly rely on guidance at the intermediate feature level, typically implemented by minimizing an lp-norm distance between teacher and student activations during training. In this paper, we propose a replacement for the pixel-wise independent lp-norm based on the structural similarity (SSIM). By taking additional contrast and structural cues into account, feature importance, correlation and spatial dependence in the feature space are considered in the loss formulation. Extensive experiments on MSCOCO demonstrate the effectiveness of our method across different training schemes and architectures. Our method adds only a small computational overhead, is straightforward to implement, and at the same time significantly outperforms the standard lp-norms. Moreover, it outperforms more complex state-of-the-art KD methods that use attention-based sampling mechanisms, including a +3.5 AP gain with a Faster R-CNN R-50 compared to a vanilla model.
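The idea of replacing a pixel-wise lp-norm feature-matching loss with an SSIM-based one can be illustrated with a minimal NumPy sketch. This is our own simplified assumption, not the paper's exact formulation: the function name `ssim_loss`, the stability constants `c1`/`c2`, and the use of global per-channel statistics (rather than local windows, as SSIM is usually computed) are all illustrative choices.

```python
import numpy as np

def ssim_loss(teacher, student, c1=1e-4, c2=9e-4):
    """Hypothetical SSIM-based feature distillation loss.

    teacher, student: feature maps of shape (C, H, W).
    Computes a global SSIM per channel and returns 1 - mean SSIM,
    so identical features yield a loss of 0.
    c1, c2 are small constants for numerical stability (assumed values).
    """
    # Flatten each channel to a vector of activations.
    t = teacher.reshape(teacher.shape[0], -1)
    s = student.reshape(student.shape[0], -1)
    # Per-channel means, variances, and teacher-student covariance.
    mu_t, mu_s = t.mean(axis=1), s.mean(axis=1)
    var_t, var_s = t.var(axis=1), s.var(axis=1)
    cov = ((t - mu_t[:, None]) * (s - mu_s[:, None])).mean(axis=1)
    # Standard SSIM formula, applied channel-wise.
    ssim = ((2 * mu_t * mu_s + c1) * (2 * cov + c2)) / (
        (mu_t ** 2 + mu_s ** 2 + c1) * (var_t + var_s + c2)
    )
    return 1.0 - ssim.mean()
```

Unlike an lp-norm, which penalizes each activation independently, this loss couples activations through the mean, variance, and covariance terms, which is what lets contrast and structural cues in the feature maps influence the gradient.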