Keywords
Autoencoder; Anomaly detection; Artificial intelligence; Computer science; Theory (learning stability); Pattern recognition (psychology); Feature (linguistics); Distortion; Deep learning; Anomaly (physics); Data mining; Machine learning; Unsupervised learning; Physics; Linguistics; Philosophy; Computer networks; Condensed matter physics; Amplifier; Bandwidth (computing)
Authors
Zhen Cheng,Siwei Wang,Pei Zhang,Siqi Wang,Xinwang Liu,En Zhu
Abstract
Deep autoencoder-based methods constitute the majority of deep anomaly detection approaches. An autoencoder trained on the training data is assumed to produce higher reconstruction error for anomalous samples than for normal samples, and can therefore distinguish anomalies from normal data. However, this assumption does not always hold in practice, especially in unsupervised anomaly detection, where the training data is contaminated with anomalies. We observe that the autoencoder generalizes so well on the training data that it reconstructs both normal and anomalous data accurately, leading to poor anomaly detection performance. In addition, we find that detection performance is unstable when reconstruction error is used as the anomaly score, which is unacceptable in the unsupervised scenario because no labels are available to guide model selection. To mitigate these drawbacks of autoencoder-based anomaly detection methods, we propose an Improved AutoEncoder for unsupervised Anomaly Detection (IAEAD). Specifically, we manipulate the feature space to bring normal data points closer together, using an anomaly detection-based loss as guidance. Unlike previous methods, by integrating the anomaly detection-based loss with the autoencoder's reconstruction loss, IAEAD jointly optimizes for the anomaly detection task and learns representations that preserve the local data structure, avoiding feature distortion. Experiments on five image data sets empirically validate the effectiveness and stability of our method.
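The baseline the abstract critiques — scoring anomalies by autoencoder reconstruction error — can be illustrated with a minimal sketch. This is not the authors' IAEAD implementation; it uses a linear autoencoder (equivalent to a PCA projection) fitted with NumPy, and all function names and the synthetic data are hypothetical, chosen only to show why samples far from the learned subspace receive higher reconstruction error.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    # A linear autoencoder with tied weights reduces to PCA: encode onto the
    # top-k principal directions, decode back with the transpose.
    mu = X.mean(axis=0)
    # SVD of the centered data gives the principal directions in Vt.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T                              # (d, k) encoder/decoder weights
    return mu, W

def anomaly_scores(X, mu, W):
    Z = (X - mu) @ W                          # encode
    X_hat = Z @ W.T + mu                      # decode (reconstruct)
    return np.sum((X - X_hat) ** 2, axis=1)   # per-sample reconstruction error

rng = np.random.default_rng(0)
# "Normal" data lies near a 2-D subspace of a 10-D space, plus slight noise.
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
normal += 0.01 * rng.normal(size=normal.shape)
# "Anomalies" are isotropic points that do not respect that subspace.
anomalies = 3.0 * rng.normal(size=(20, 10))

mu, W = fit_linear_autoencoder(normal, k=2)
print(anomaly_scores(normal, mu, W).mean(),
      anomaly_scores(anomalies, mu, W).mean())
```

Under these assumptions the mean score of the anomalous points is far larger than that of the normal points, which is exactly the separation the reconstruction-error assumption relies on; the abstract's point is that a sufficiently expressive nonlinear autoencoder trained on contaminated data can erase this gap.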