Histopathological tissue classification is a fundamental task in computational pathology. Deep learning (DL)-based models have achieved superior performance, but centralized training suffers from privacy leakage. Federated learning (FL) can safeguard privacy by keeping training samples local, yet existing FL-based frameworks require large numbers of well-annotated training samples and numerous rounds of communication, which hinders their viability in real-world clinical scenarios. In this article, we propose a lightweight and universal FL framework, named federated deep-broad learning (FedDBL), that achieves superior classification performance with limited training samples and only one round of communication. By simply integrating a pretrained DL feature extractor and a fast, lightweight broad learning inference system with a classical federated aggregation approach, FedDBL dramatically reduces data dependency and improves communication efficiency. Five-fold cross-validation demonstrates that FedDBL greatly outperforms competing methods with only one round of communication and limited training samples, and even achieves performance comparable to frameworks trained with multiple communication rounds. Furthermore, owing to its lightweight design and one-round communication, FedDBL reduces the communication burden from 4.6 GB to only 138.4 KB per client with a ResNet-50 backbone over 50 training rounds. Extensive experiments also demonstrate the scalability of FedDBL in terms of generalization to unseen datasets, varying numbers of clients, model personalization, and other image modalities. Since neither data nor deep models are shared across clients, privacy is preserved and model security is guaranteed, with no risk of model inversion attacks. Code is available at https://github.com/tianpeng-deng/FedDBL.
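The one-round workflow described above can be sketched at a high level: each client extracts features with a frozen pretrained backbone, fits a lightweight classifier locally in closed form, and the server aggregates the client classifiers once. The following is a minimal illustrative sketch, not the paper's implementation; the ridge-regression readout standing in for the broad learning system, the FedAvg-style size-weighted averaging, and all function names are assumptions for illustration.

```python
import numpy as np

def fit_local_classifier(features, labels_onehot, reg=1e-2):
    """Closed-form ridge readout W = (X^T X + reg*I)^-1 X^T Y.

    A stand-in for the paper's broad learning inference system:
    it trains in one shot, so no iterative local epochs are needed.
    """
    d = features.shape[1]
    A = features.T @ features + reg * np.eye(d)
    return np.linalg.solve(A, features.T @ labels_onehot)

def federated_average(weights, sizes):
    """Size-weighted average of client classifier weights (FedAvg-style)."""
    sizes = np.asarray(sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, weights))

# Simulate two clients; the random features stand in for
# embeddings from a frozen pretrained backbone (e.g., ResNet-50).
rng = np.random.default_rng(0)
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 8))          # hypothetical feature matrix
    y = (X[:, 0] > 0).astype(int)        # hypothetical binary labels
    Y = np.eye(2)[y]                     # one-hot targets
    clients.append((fit_local_classifier(X, Y), n))

# One round of communication: clients upload only the small
# classifier weights; the deep backbone never leaves the client.
global_W = federated_average([w for w, _ in clients],
                             [n for _, n in clients])
```

Because only the small readout weights travel (here an 8x2 matrix rather than a full deep network), the payload per client stays tiny, which is the intuition behind the reported drop from gigabytes to kilobytes.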