The growing availability of data in the financial sector promises to raise the performance of machine learning models to a new level. However, given the high business value and confidentiality of credit data, integrating datasets from multiple institutions for credit scoring modeling may lead to privacy leakage. In this paper, we therefore adopt a horizontal federated learning paradigm that protects each participant's local private data while collaboratively training a powerful shared global model. During collaborative training, however, heterogeneous data distributions can prevent the model from learning adequately. To overcome this issue, we propose the federated knowledge transfer (FedKT) method, which exploits the advantages of fine-tuning and knowledge distillation to extract generic knowledge from the early layers of the global model and specific knowledge from its outputs, respectively, thereby improving the learning performance of the local models. We adopt five credit datasets and four performance measures to demonstrate the effectiveness of the proposed method. The experimental results show that it can securely utilize credit data from different parties to improve credit scoring performance, which supports the potential of the proposed method for further applications in credit scoring.
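
The following is a minimal sketch of the kind of local update the abstract describes, assuming a PyTorch setup: the local model reuses the global model's early layers (fine-tuning of generic knowledge) and is additionally trained against the global model's soft outputs (distillation of specific knowledge). The layer split point, loss weights, and optimizer settings are illustrative assumptions, not the paper's exact configuration.

```python
import copy
import torch
import torch.nn.functional as F

def local_update(global_model, local_loader, epochs=1, lr=1e-3,
                 distill_weight=0.5, temperature=2.0, num_generic_layers=2):
    """Fine-tune a copy of the global model on local credit data while
    distilling knowledge from the global model's outputs (hypothetical sketch)."""
    # Start from the global model so its early (generic) layers are reused.
    local_model = copy.deepcopy(global_model)

    # Freeze the first few layers to preserve the generic knowledge learned
    # during federated aggregation (the layer count here is an assumption).
    for i, child in enumerate(local_model.children()):
        if i < num_generic_layers:
            for p in child.parameters():
                p.requires_grad = False

    global_model.eval()
    optimizer = torch.optim.Adam(
        (p for p in local_model.parameters() if p.requires_grad), lr=lr)

    for _ in range(epochs):
        for x, y in local_loader:
            optimizer.zero_grad()
            student_logits = local_model(x)

            # Hard-label loss on the private local credit data.
            ce_loss = F.cross_entropy(student_logits, y)

            # Soft-label (distillation) loss against the global model's
            # outputs, transferring its specific knowledge to the local model.
            with torch.no_grad():
                teacher_logits = global_model(x)
            kd_loss = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean") * temperature ** 2

            loss = (1 - distill_weight) * ce_loss + distill_weight * kd_loss
            loss.backward()
            optimizer.step()

    return local_model
```

In this sketch, each participant would call `local_update` after receiving the aggregated global model in a federation round; only the resulting model updates, never the raw credit records, would leave the institution.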