Federated Learning (FL) has attracted increasing attention in medical imaging as an alternative to centralized data sharing, since it can leverage large amounts of data from different hospitals to improve the generalization of machine learning models. However, while FL offers a degree of patient privacy protection by keeping data at local client hospitals, privacy can still be compromised when model parameters are exchanged between local clients and the server. Meanwhile, although efficient training strategies are being actively investigated, communication overhead remains a major challenge in FL, as training requires frequent exchanges of substantial model updates between clients and the server. This becomes more prominent when more complex models, such as transformers, are introduced in medical imaging, and when geographically distant collaborators participate in FL studies on global health problems. To this end, we propose FeSEC, a secure and efficient FL framework, to address these two challenges. In particular, we first adopt a sparse compression algorithm for efficient communication among the distributed hospitals, and we then integrate homomorphic encryption with differential privacy to safeguard data privacy during model exchanges. Experiments on the task of COVID-19 detection show that the proposed FeSEC substantially improves the accuracy and privacy preservation of FL models compared with FedAvg, at less than 10% of the communication cost.
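To make the pipeline named in the abstract concrete, the following is a minimal Python sketch of how top-k sparse compression, Gaussian differential-privacy noise, and additively homomorphic (Paillier) encryption could compose on the client side. It is an illustration under stated assumptions, not the paper's implementation: the helper names (topk_sparsify, gaussian_dp), the use of the third-party phe (python-paillier) library, and all parameter values (ratio=0.1, clip_norm, sigma, key length) are assumptions for the sake of the example.

```python
# Hypothetical sketch of a FeSEC-style client update; not the authors' code.
import numpy as np
from phe import paillier  # pip install phe (python-paillier)


def topk_sparsify(update: np.ndarray, ratio: float = 0.1):
    """Keep only the largest-magnitude `ratio` fraction of entries, so
    roughly 10% of the model update needs to be communicated."""
    k = max(1, int(update.size * ratio))
    flat = update.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]


def gaussian_dp(values: np.ndarray, clip_norm: float = 1.0,
                sigma: float = 0.5, rng=None) -> np.ndarray:
    """Clip the update to bound its L2 sensitivity, then add Gaussian
    noise calibrated to the clipping norm (the Gaussian mechanism)."""
    rng = rng or np.random.default_rng(0)
    scale = min(1.0, clip_norm / (np.linalg.norm(values) + 1e-12))
    return values * scale + rng.normal(0.0, sigma * clip_norm, values.shape)


# --- client side: compress, perturb, encrypt (illustrative values) ---
pub, priv = paillier.generate_paillier_keypair(n_length=1024)  # demo key size
update = np.random.default_rng(1).normal(size=1000)            # stand-in gradient
idx, vals = topk_sparsify(update, ratio=0.1)   # indices are sent alongside values
noisy = gaussian_dp(vals)
encrypted = [pub.encrypt(float(v)) for v in noisy]

# --- server side: Paillier is additively homomorphic, so encrypted
# updates from different clients can be summed without decryption ---
summed = [c + c for c in encrypted]  # stand-in for summing two clients' updates
decrypted = np.array([priv.decrypt(c) for c in summed])
```

In a sketch like this, only the ciphertexts and the sparse indices leave the hospital, the server aggregates under encryption, and the DP noise bounds what any single decrypted aggregate reveals; how FeSEC parameterizes and orders these steps is detailed in the paper itself.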