Keywords
Computer science
Distillation
Scratch (learning from scratch)
Artificial intelligence
Machine learning
Artificial neural network
Contextual image classification
Encoding
Feature (linguistics)
Knowledge extraction
Pattern recognition (psychology)
Image (mathematics)
Biochemistry
Chemistry
Linguistics
Philosophy
Organic chemistry
Gene
Operating system
Authors
Quanshi Zhang,Xu Cheng,Yilan Chen,Zhefan Rao
Identifier
DOI:10.1109/tpami.2022.3200344
Abstract
Compared to traditional learning from scratch, knowledge distillation sometimes enables a DNN to achieve superior performance. In this paper, we provide a new perspective to explain the success of knowledge distillation based on information theory, i.e., quantifying the knowledge points encoded in intermediate layers of a DNN for classification. To this end, we consider the signal processing in a DNN as a layer-wise process of discarding information. A knowledge point is defined as an input unit whose information is discarded much less than that of other input units. Based on this quantification of knowledge points, we propose three hypotheses for knowledge distillation. 1. A DNN learning from knowledge distillation encodes more knowledge points than a DNN learning from scratch. 2. Knowledge distillation makes the DNN more likely to learn different knowledge points simultaneously, whereas a DNN learning from scratch tends to encode various knowledge points sequentially. 3. A DNN learning from knowledge distillation is often optimized more stably than a DNN learning from scratch. To verify these hypotheses, we design three types of metrics based on annotations of foreground objects to analyze the feature representations of a DNN, i.e., the quantity and quality of knowledge points, the learning speed of different knowledge points, and the stability of optimization directions. In experiments, we diagnosed various DNNs on different classification tasks, including image classification, 3D point cloud classification, binary sentiment classification, and question answering, and the results verified the above hypotheses.
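The following is a minimal, illustrative sketch of the abstract's central idea: estimating, for each input unit (here, each pixel), how much information a trained classifier discards, and then treating units whose information is discarded much less than average as knowledge points. It is not the authors' exact algorithm; the perturbation-based objective, the function and parameter names (estimate_information_discarding, lambda_feat, the 0.5 threshold), and the use of a ResNet-18 backbone are all assumptions made for illustration.

```python
# Sketch (not the paper's exact method): learn a per-pixel Gaussian noise scale
# sigma for one image; pixels that tolerate little noise before the intermediate
# feature drifts are taken as candidate knowledge points.
import torch
import torch.nn.functional as F


def estimate_information_discarding(feature_extractor, x, steps=200, lr=0.01,
                                     lambda_feat=10.0):
    """Return an (H, W) map of learned noise scales for an image x of shape (1, C, H, W).

    The objective encourages large noise (more information discarded) while
    penalizing drift of the intermediate feature f(x + noise) away from f(x).
    Small resulting sigma = little information discarded = candidate knowledge point.
    """
    feature_extractor.eval()
    with torch.no_grad():
        f_clean = feature_extractor(x)

    # One noise scale per spatial location, shared across channels.
    log_sigma = torch.full((1, 1, x.shape[2], x.shape[3]), -2.0, requires_grad=True)
    opt = torch.optim.Adam([log_sigma], lr=lr)

    for _ in range(steps):
        sigma = log_sigma.exp()
        noise = sigma * torch.randn_like(x)          # reparameterized Gaussian perturbation
        f_noisy = feature_extractor(x + noise)
        feat_drift = F.mse_loss(f_noisy, f_clean)    # keep the intermediate feature nearly unchanged
        entropy = log_sigma.mean()                   # noise entropy grows with log sigma
        loss = lambda_feat * feat_drift - entropy    # discard as much information as the feature allows
        opt.zero_grad()
        loss.backward()
        opt.step()

    return log_sigma.detach().exp().squeeze()        # (H, W); small value = knowledge point


if __name__ == "__main__":
    import torchvision
    model = torchvision.models.resnet18(weights=None)
    # Use everything up to the last conv block as the "intermediate layer".
    feature_extractor = torch.nn.Sequential(*list(model.children())[:-2])
    x = torch.randn(1, 3, 224, 224)
    sigma_map = estimate_information_discarding(feature_extractor, x, steps=20)
    # Knowledge points: pixels whose discarded information is well below the image average.
    knowledge_mask = sigma_map < 0.5 * sigma_map.mean()
    print("fraction of knowledge points:", knowledge_mask.float().mean().item())
```

With a quantification like this and foreground annotations, one could then count knowledge points inside and outside the object region, track how quickly different knowledge points emerge during training, and compare a distilled DNN against one trained from scratch, in the spirit of the three metrics described in the abstract.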