Federated learning (FL) is an emerging learning paradigm that allows multiple parties to collaboratively train a shared global model while preserving privacy. However, vanilla FL running on heterogeneous mobile edge devices still faces three crucial challenges: communication efficiency, statistical heterogeneity, and system heterogeneity. To tackle them simultaneously, we devise FedPE, a communication-efficient and personalized federated learning framework that allows each client to search for a personalized optimal local subnet adapted to its system capacity in each round of FL. It consists of three core components: a) an adaptive pruning-expanding mechanism that controls model pruning or expansion according to the accuracy variations of local models, b) an error compensation strategy that promotes the pruned or expanded subnets to become Lottery Ticket Networks (LTNs), and c) a fair aggregation rule that aggregates local models with their real-time contributions as coefficients to boost the performance of the aggregated global model. Together, these components ensure that only personalized optimal subnets with different footprints are exchanged between the server and clients, which effectively reduces communication cost and enhances the robustness of FL to statistical and system heterogeneity. We also prove the convergence of FedPE and design an optimal hyperparameter searching (OHS) algorithm based on Pareto optimization to find its optimal hyperparameters. Extensive experiments on five real-world datasets with IID or Non-IID distributions demonstrate that FedPE, configured with the optimal hyperparameters found by OHS, achieves a $1.86\times$--$121\times$ improvement in communication efficiency with almost no accuracy degradation, presenting the best trade-off between model accuracy and communication cost.
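For intuition only, the sketch below illustrates what a contribution-weighted ("fair") aggregation step over heterogeneous subnets might look like; it is not the paper's implementation, and all names (`fair_aggregate`, `contributions`, `masks`) are assumptions introduced here for illustration.

```python
# Hypothetical sketch of a contribution-weighted aggregation rule over pruned subnets.
# Names and structure are illustrative assumptions, not the FedPE implementation.
import numpy as np

def fair_aggregate(local_params, masks, contributions):
    """Aggregate local subnets into a global model.

    local_params:  list of dicts {layer_name: np.ndarray}, each client's subnet weights
                   (pruned entries are zero).
    masks:         list of dicts {layer_name: np.ndarray}, 1 where the client keeps a weight.
    contributions: list of non-negative floats, each client's real-time contribution score.
    """
    c = np.asarray(contributions, dtype=np.float64)
    c = c / c.sum()  # normalize contributions into aggregation coefficients

    global_params = {}
    for name in local_params[0]:
        num = np.zeros_like(local_params[0][name], dtype=np.float64)
        den = np.zeros_like(local_params[0][name], dtype=np.float64)
        for k, (params, mask) in enumerate(zip(local_params, masks)):
            num += c[k] * mask[name] * params[name]  # weight each client by its contribution
            den += c[k] * mask[name]                 # only clients holding this weight count
        # average per weight over the clients that actually kept it
        global_params[name] = np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
    return global_params
```

The per-parameter mask bookkeeping reflects the fact that clients exchange subnets with different footprints, so each global weight is averaged only over the clients that retained it.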