Overfitting
Computer science
Artificial intelligence
Segmentation
Context (archaeology)
Machine learning
Pattern recognition (psychology)
Distillation
Anomaly detection
Similarity (geometry)
Artificial neural network
Image (mathematics)
Biology
Paleontology
Organic chemistry
Chemistry
Authors
Yunkang Cao, Qian Wan, Weiming Shen, Liang Gao
Identifier
DOI:10.1016/j.knosys.2022.108846
Abstract
Unsupervised anomaly segmentation methods based on knowledge distillation have recently been developed and have shown superior segmentation performance. However, little attention has been paid to the overfitting problem caused by the mismatch between the capacity of a neural network and the amount of knowledge in this scheme. This study proposes a novel method called informative knowledge distillation (IKD) to address the overfitting problem by distilling informative knowledge and offering a strong supervisory signal. Technically, a novel context similarity loss is proposed to capture context information from normal data manifolds. In addition, a novel adaptive hard sample mining method is proposed to focus more attention on hard samples with valuable information. With IKD, informative knowledge can be distilled so that the overfitting problem is effectively mitigated and performance is further improved. In terms of AU-ROC, the proposed method outperformed state-of-the-art methods on several categories of the well-known MVTec AD dataset, achieving an overall score of 97.81% across 15 categories. Extensive ablation experiments have also been conducted to demonstrate the effectiveness of IKD in alleviating the overfitting problem.
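To make the two ingredients concrete, below is a minimal PyTorch sketch of a teacher-student distillation loss that combines a context-similarity term (matching the pairwise similarity structure of teacher and student feature maps) with a simple hard-sample weighting. All function names, the fixed top-k mining rule, and the hyperparameters are illustrative assumptions for exposition, not the paper's actual IKD formulation.

```python
# A minimal sketch of distillation with a context-similarity term and
# hard-sample weighting. Names and formulas are illustrative guesses,
# not the paper's actual IKD implementation.
import torch
import torch.nn.functional as F


def context_similarity_loss(f_t: torch.Tensor, f_s: torch.Tensor) -> torch.Tensor:
    """Match the pairwise similarity structure of teacher and student features.

    f_t, f_s: feature maps of shape (B, C, H, W) from a frozen teacher and a
    trainable student, assumed to share the same spatial resolution.
    """
    # Flatten spatial positions and L2-normalize each feature vector.
    t = F.normalize(f_t.flatten(2).transpose(1, 2), dim=-1)  # (B, HW, C)
    s = F.normalize(f_s.flatten(2).transpose(1, 2), dim=-1)  # (B, HW, C)
    # Pairwise cosine-similarity matrices over spatial positions.
    sim_t = torch.bmm(t, t.transpose(1, 2))  # (B, HW, HW)
    sim_s = torch.bmm(s, s.transpose(1, 2))  # (B, HW, HW)
    return F.mse_loss(sim_s, sim_t)


def distillation_loss(f_t: torch.Tensor, f_s: torch.Tensor,
                      lambda_ctx: float = 1.0,
                      top_ratio: float = 0.1) -> torch.Tensor:
    """Per-pixel distillation loss with hard-sample mining.

    Only the fraction `top_ratio` of pixels with the largest teacher-student
    discrepancy contributes to the pointwise term; this fixed top-k rule is a
    simple stand-in for the paper's adaptive hard sample mining.
    """
    # Pointwise discrepancy per spatial location: (B, H, W).
    err = ((F.normalize(f_t, dim=1) - F.normalize(f_s, dim=1)) ** 2).sum(dim=1)
    flat = err.flatten(1)                     # (B, HW)
    k = max(1, int(top_ratio * flat.shape[1]))
    hard = flat.topk(k, dim=1).values.mean()  # hardest pixels only
    return hard + lambda_ctx * context_similarity_loss(f_t, f_s)


if __name__ == "__main__":
    # Toy check with random features standing in for real teacher/student maps.
    f_t = torch.randn(2, 64, 16, 16)
    f_s = torch.randn(2, 64, 16, 16, requires_grad=True)
    loss = distillation_loss(f_t, f_s)
    loss.backward()
    print(float(loss))
```

In this sketch the context term supervises relations between spatial positions rather than individual features, which is one plausible reading of "capturing context information from normal data manifolds"; the abstract alone does not specify the exact formulation.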