Computer science
Distillation
Dropout (neural networks)
Robustness (evolution)
Artificial intelligence
Artificial neural network
Machine learning
Generalization
Simplicity (philosophy)
Deep neural network
Mathematics
Mathematical analysis
Biochemistry
Chemistry
Philosophy
Organic chemistry
Epistemology
Gene
Authors
Hyoje Lee, Yeachan Park, Hyun Wook Seo, Myungjoo Kang
Identifier
DOI:10.1016/j.cviu.2023.103720
Abstract
To boost performance, deep neural networks require deeper or wider network structures that involve massive computational and memory costs. To alleviate this issue, self-knowledge distillation regularizes the model by distilling the internal knowledge of the model itself. Conventional self-knowledge distillation methods require additional trainable parameters or are dependent on the data. In this paper, we propose a simple and effective self-knowledge distillation method using dropout (SD-Dropout). SD-Dropout distills the posterior distributions of multiple models obtained through dropout sampling. Our method does not require any additional trainable modules, does not rely on data, and uses only simple operations. Furthermore, this simple method can be easily combined with various self-knowledge distillation approaches. We provide a theoretical and experimental analysis of the effect of forward and reverse KL-divergences in our work. Extensive experiments on various vision tasks, i.e., image classification, object detection, and distribution shift, demonstrate that the proposed method effectively improves the generalization of a single network. Further experiments show that the proposed method also improves calibration performance, adversarial robustness, and out-of-distribution detection ability.
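Note: the following is a minimal illustrative sketch of the SD-Dropout idea as the abstract describes it, not the authors' released code. It assumes a PyTorch classifier with a single dropout layer; the names SmallClassifier and sd_dropout_loss, the temperature and weighting values, and the symmetric forward-plus-reverse KL formulation are illustrative assumptions that may differ from the paper's exact loss.

# Illustrative sketch (not the authors' code): two stochastic forward passes
# through the same network with dropout active give two posterior
# distributions, and each is distilled into the other via KL divergence
# alongside the usual cross-entropy task loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    """Toy classifier with one dropout layer; stands in for any backbone."""
    def __init__(self, in_dim=784, hidden=256, num_classes=10, p=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.drop = nn.Dropout(p)          # dropout supplies the sampling
        self.fc2 = nn.Linear(hidden, num_classes)

    def forward(self, x):
        return self.fc2(self.drop(F.relu(self.fc1(x))))

def sd_dropout_loss(model, x, y, temperature=4.0, alpha=0.5):
    """Cross-entropy plus a symmetric (forward + reverse) KL term between
    two dropout-sampled posteriors of the same network (assumed form)."""
    logits_a = model(x)                    # first dropout sample
    logits_b = model(x)                    # second dropout sample (new mask)

    ce = 0.5 * (F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y))

    log_p_a = F.log_softmax(logits_a / temperature, dim=1)
    log_p_b = F.log_softmax(logits_b / temperature, dim=1)
    # Distill sample B into A and sample A into B; the detached branch acts
    # as the "teacher" so each term only sends gradient to its "student".
    kl_fwd = F.kl_div(log_p_a, log_p_b.detach().exp(), reduction="batchmean")
    kl_rev = F.kl_div(log_p_b, log_p_a.detach().exp(), reduction="batchmean")
    kd = (temperature ** 2) * 0.5 * (kl_fwd + kl_rev)

    return ce + alpha * kd

# Usage: one training step on random data.
model = SmallClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
model.train()                              # keep dropout active for both passes
loss = sd_dropout_loss(model, x, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Detaching the teacher branch in each KL term keeps the two dropout samples from directly chasing each other's gradients, and the temperature-squared scaling follows the common knowledge-distillation convention; both are design assumptions of this sketch rather than details confirmed by the abstract.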