Researchers strive to design artificial intelligence (AI) models that fully utilize the potential of data while protecting privacy. Federated learning is a promising solution because it exploits data while shielding them from parties who do not own them. However, assessing data quality becomes a challenge in federated learning. We propose a data quality assessment method, Federated Data Quality Assessment (FedDQA), and compare it with traditional federated learning methods. FedDQA identifies low-quality data from participants and reduces their influence on the global model. We integrate data quality regularization strategies at the instance, feature, and participant levels into the federated learning model. Across various data poisoning settings, FedDQA outperforms existing federated learning methods in both prediction performance and accuracy in detecting low-quality data.
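To make the participant-level idea concrete, the sketch below shows one way a server could down-weight low-quality participants during aggregation. This is a minimal illustration under assumed conventions, not FedDQA's actual procedure: the function name `quality_weighted_aggregate` and the per-client quality scores are hypothetical.

```python
import numpy as np

def quality_weighted_aggregate(client_weights, quality_scores):
    """Aggregate client model updates, shrinking low-quality clients' influence.

    client_weights: list of 1-D numpy arrays (one flattened model per client).
    quality_scores: per-client quality estimates in [0, 1]; lower scores
        reduce that client's contribution to the global model.
    """
    scores = np.asarray(quality_scores, dtype=float)
    coeffs = scores / scores.sum()       # normalize to a convex combination
    stacked = np.stack(client_weights)   # shape: (n_clients, n_params)
    return coeffs @ stacked              # quality-weighted average

# Example: the third client holds poisoned (low-quality) data,
# so its hypothetical quality score is low and its update is damped.
rng = np.random.default_rng(0)
clients = [rng.normal(size=4) for _ in range(3)]
scores = [0.9, 0.8, 0.1]
global_update = quality_weighted_aggregate(clients, scores)
print(global_update)
```

In this toy setup the aggregation reduces to standard federated averaging when all quality scores are equal, so quality weighting can be viewed as a regularized generalization of the usual aggregation rule.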