Federated learning (FL) is a distributed machine learning framework that has emerged in recent years; it protects participants' data privacy to a certain extent because participants never exchange their original data. Unfortunately, FL remains vulnerable to privacy attacks (e.g., membership inference attacks) and security attacks (e.g., model poisoning attacks), which can compromise participants' data or corrupt the trained model. Inspired by previous works, we propose NSPFL, a novel federated learning framework with data integrity auditing. First, NSPFL defends against privacy attacks by using a single mask to hide each participant's original data. Second, NSPFL constructs a novel reputation evaluation method that resists security attacks by measuring the distance between the previous and current aggregated gradients. Third, NSPFL uses the data stored on the cloud to prevent malicious Byzantine participants from denying their behaviors. Finally, sufficient theoretical analysis proves the reliability of the scheme, and extensive experiments demonstrate the effectiveness of NSPFL.
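To make the reputation idea concrete, the following is a minimal Python sketch, not the paper's actual protocol: it assumes the distance metric is the Euclidean norm, that each participant's score decays exponentially with its gradient's distance from the previous aggregate, and that aggregation is a reputation-weighted average. All names (`reputation_update`, `weighted_aggregate`, `tau`, `alpha`) are illustrative, not taken from NSPFL.

```python
import numpy as np

def reputation_update(prev_agg, curr_grad, reputation, tau=1.0, alpha=0.1):
    """Toy reputation update: penalize a gradient that is far from the
    previous round's aggregated gradient (assumed Euclidean distance)."""
    dist = np.linalg.norm(curr_grad - prev_agg)
    score = np.exp(-dist / tau)  # in (0, 1]; closer to the aggregate => higher
    # Exponential moving average keeps history, so one good round
    # does not instantly restore a misbehaving participant's reputation.
    return (1 - alpha) * reputation + alpha * score

def weighted_aggregate(grads, reputations):
    """Aggregate gradients weighted by normalized reputations, so
    low-reputation (suspected Byzantine) updates contribute little."""
    w = np.asarray(reputations, dtype=float)
    w = w / w.sum()
    return np.einsum('i,ij->j', w, np.stack(grads))

# Tiny simulation: participant 2 submits a poisoned (scaled) gradient.
rng = np.random.default_rng(0)
prev_agg = rng.normal(size=10)
grads = [prev_agg + 0.1 * rng.normal(size=10) for _ in range(2)]
grads.append(prev_agg + 50.0 * rng.normal(size=10))  # poisoned update
reps = [reputation_update(prev_agg, g, reputation=0.5) for g in grads]
print(reps)                                  # poisoned participant scores lowest
print(weighted_aggregate(grads, reps))       # its influence is down-weighted
```

Under these assumptions, a model-poisoning participant whose updates drift far from the aggregate sees its reputation, and hence its weight in aggregation, shrink over successive rounds.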