Pre-trained models have been widely adopted in deep learning development, benefiting the fine-tuning of downstream user-specific tasks with enormous computation savings. However, backdoor attacks pose a severe security threat to subsequent models built upon compromised pre-trained models, which calls for effective countermeasures to mitigate the backdoor threat before deploying the victim models to safety-critical applications. This paper proposes Purifier, a novel backdoor mitigation framework for pre-trained models that suppresses anomalous activations. Purifier is motivated by the observation that, for backdoor triggers, anomalous activation patterns exist across different perspectives (e.g., channel-wise, cube-wise, and feature-wise), each featuring a different degree of granularity. More importantly, suppressing at the right granularity is vital to both robustness and accuracy. To this end, Purifier defends against diverse types of backdoor triggers without any prior knowledge of the backdoor attacks, while remaining convenient and flexible to deploy, i.e., plug-and-play. Extensive experimental results against a series of state-of-the-art mainstream attacks show that Purifier outperforms state-of-the-art methods in both defense effectiveness and model inference accuracy on clean examples. Our code and Appendix can be found at \url{github.com/RUIYUN-ML/Purifier}.
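To make the suppression idea concrete, below is a minimal PyTorch sketch of channel-wise anomaly-activation clamping, one of the granularities mentioned above. The class name `ChannelSuppressor`, the calibration statistics, and the threshold `tau` are illustrative assumptions for exposition, not Purifier's actual implementation.

```python
# A minimal sketch (not the paper's exact algorithm) of channel-wise
# anomaly-activation suppression for a PyTorch model. The calibration
# statistics, the threshold `tau`, and the wrapped layer are all
# illustrative assumptions.
import torch
import torch.nn as nn


class ChannelSuppressor(nn.Module):
    """Clamps per-channel activations that deviate from clean-data statistics."""

    def __init__(self, mean: torch.Tensor, std: torch.Tensor, tau: float = 3.0):
        super().__init__()
        # Per-channel mean/std, assumed estimated on a small clean calibration set.
        self.register_buffer("mean", mean.view(1, -1, 1, 1))
        self.register_buffer("std", std.view(1, -1, 1, 1))
        self.tau = tau  # how many standard deviations count as "anomalous"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        upper = self.mean + self.tau * self.std
        lower = self.mean - self.tau * self.std
        # Suppress (clamp) only out-of-range activations; in-range activations
        # pass through unchanged, which is what preserves clean accuracy.
        return torch.clamp(x, min=lower, max=upper)


# Plug-and-play usage: wrap an existing layer of a pre-trained backbone
# without retraining it.
if __name__ == "__main__":
    conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
    # Hypothetical calibration statistics for the 8 output channels.
    mean, std = torch.zeros(8), torch.ones(8)
    purified = nn.Sequential(conv, ChannelSuppressor(mean, std, tau=3.0))
    out = purified(torch.randn(4, 3, 32, 32))
    print(out.shape)  # torch.Size([4, 8, 32, 32])
```

The same wrapping pattern could in principle be applied at coarser or finer granularities (e.g., cube-wise or feature-wise) by changing which dimensions the calibration statistics are computed over; the right choice of granularity is exactly the trade-off the abstract highlights.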