Deep learning (DL) based anomaly detection has shown great promise in the security domain due to its remarkable performance across a variety of tasks. However, the poor interpretability of DL models has significantly impeded their deployment in practical security applications. Despite the progress made by existing studies on DL explanations, the majority focus on providing local explanations for individual samples, neglecting a global understanding of the knowledge encoded in the model. Furthermore, most explanation methods designed for supervised models fail to apply to anomaly detection because of its different learning mechanism.