Keywords: Autoencoder; Transformation; Artificial intelligence; Anomaly detection; Computer science; Deep learning; Pattern recognition; Representation; Encoder; Image; Unsupervised learning; Computer vision
Authors
Chao Huang, Zehua Yang, Jiangtao Wen, Yong Xu, Qiuping Jiang, Jian Yang, Yaowei Wang
Source
Journal: IEEE Transactions on Cybernetics
[Institute of Electrical and Electronics Engineers]
Date: 2022-12-01
Volume/Issue: 52 (12): 13834-13847
Citations: 24
Identifier
DOI:10.1109/tcyb.2021.3127716
Abstract
Deep autoencoders (AEs) have demonstrated promising performance in visual anomaly detection (VAD). Having learned normal patterns from normal data, a deep AE is expected to yield larger reconstruction errors for anomalous samples, and this error is used as the criterion for detecting anomalies. However, this hypothesis does not always hold, since a deep AE often captures low-level features shared between normal and abnormal data, which leads to similar reconstruction errors for both. To tackle this problem, we propose a self-supervised representation-augmented deep AE for unsupervised VAD, which enlarges the gap in anomaly scores between normal and abnormal samples by introducing autoencoding transformation (AT). Essentially, AT encourages the AE to learn high-level visual semantic features of normal images through a self-supervision task (transformation reconstruction). In particular, our model feeds the original and transformed images into the encoder to obtain latent representations; these are then passed to the decoder to reconstruct both the original image and the applied transformation. In this way, our model can use both the image and transformation reconstruction errors to detect anomalies. Extensive experiments indicate that the proposed method outperforms other state-of-the-art methods, demonstrating the validity and effectiveness of our model.
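The abstract's scoring idea can be sketched as a weighted sum of two terms: the image reconstruction error and the transformation reconstruction error. The sketch below is a minimal NumPy illustration of that combination only; the function name `anomaly_score`, the weights `alpha`/`beta`, and the use of cross-entropy for the transformation term are hypothetical assumptions, not the paper's exact formulation.

```python
import numpy as np

def anomaly_score(x, x_recon, t_logits, t_true, alpha=1.0, beta=1.0):
    """Hypothetical anomaly score combining the two reconstruction errors
    described in the abstract: image reconstruction (MSE) plus
    transformation reconstruction (cross-entropy over predicted
    transformation labels). Higher score = more anomalous."""
    # Image reconstruction error: mean squared error between input and output.
    img_err = np.mean((x - x_recon) ** 2)
    # Transformation reconstruction error: cross-entropy of the predicted
    # transformation class (numerically stable softmax over the logits).
    p = np.exp(t_logits - np.max(t_logits))
    p /= p.sum()
    trans_err = -np.log(p[t_true] + 1e-12)
    return alpha * img_err + beta * trans_err

# A normal sample (good reconstruction, transformation correctly predicted)
# should score lower than an anomalous one (poor reconstruction, wrong
# transformation prediction) under this scheme.
x = np.array([0.2, 0.8])
normal_score = anomaly_score(x, x, np.array([6.0, 0.0, 0.0]), t_true=0)
anomalous_score = anomaly_score(x, x + 0.5, np.array([0.0, 6.0, 0.0]), t_true=0)
```

The weighting between the two terms is a design choice; in practice it would be tuned so that neither error dominates the score.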