Federated learning (FL) is vulnerable to backdoor attacks, which aim to cause misclassification of samples carrying a specific backdoor trigger. Most existing defense algorithms rely on restrictive conditions, such as assumptions about the data distribution across the participating clients, the number of attackers, or access to auxiliary information, which limits their use in practical FL. In this paper, we propose RoPE, which consists of three parts: using principal component analysis (PCA) to extract the important features of model gradients; leveraging expectation–maximization (EM) to separate malicious clients from benign ones according to these features; and removing the potential malicious gradients within the selected cluster with Isolation Forest. RoPE requires no restrictive assumptions during the training process. We evaluate the performance of RoPE on three image classification tasks under the non-independent and identically distributed (non-iid) scenario against centralized backdoor attacks with various ratios of attackers and against distributed backdoor attacks. We also evaluate RoPE under other backdoor attack scenarios, including the independent and identically distributed (iid) scheme and elaborately designed attack schemes. The results show that RoPE can defend against these backdoor attacks and outperforms the existing algorithms. In addition, we explore the impact of the number of extracted features on RoPE's performance and conduct ablation experiments.
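The three-stage pipeline above (PCA feature extraction, EM-based separation, Isolation Forest filtering) can be sketched with scikit-learn. This is only an illustrative sketch under assumptions: the flattened per-client gradients are synthetic, `GaussianMixture` stands in for the EM step, and all dimensions and counts are hypothetical; it is not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical flattened per-client gradient updates: 20 benign, 5 malicious.
benign = rng.normal(0.0, 1.0, size=(20, 100))
malicious = rng.normal(3.0, 1.0, size=(5, 100))
grads = np.vstack([benign, malicious])

# Step 1: extract the important features of the gradients with PCA.
feats = PCA(n_components=5, random_state=0).fit_transform(grads)

# Step 2: EM-based clustering (Gaussian mixture with 2 components)
# to separate malicious clients from benign ones in feature space.
gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
labels = gmm.predict(feats)
# Heuristically treat the larger cluster as the benign one.
benign_label = np.argmax(np.bincount(labels))
selected = feats[labels == benign_label]

# Step 3: Isolation Forest removes residual malicious outliers
# within the selected cluster (predict == 1 marks inliers).
iso = IsolationForest(random_state=0).fit(selected)
kept = selected[iso.predict(selected) == 1]
print(len(selected), len(kept))
```

Only the gradients corresponding to `kept` would then enter aggregation; in a real FL round these vectors would come from the clients' model updates rather than a random generator.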