As the number of parameters in artificial intelligence (AI) models continues to grow, distributed training of AI models across numerous servers within data centers has become commonplace. However, traditional load balancing strategies in data center networks are not well suited to distributed training: unbalanced network load reduces network throughput and degrades application performance. To address this issue, we propose a hybrid-granularity network load balancing strategy that combines global path planning in advance, periodic flow scheduling, and real-time packet rerouting. Simulation results demonstrate that our method reduces both throughput imbalance and exposed communication time.