Intelligent fault diagnosis (IFD) methods based on incremental learning (IL) can accommodate new fault categories without retraining the model from scratch, making them a research hotspot in the field of fault diagnosis. Currently, the combination of knowledge distillation (KD) and replay techniques is widely used to alleviate catastrophic forgetting in IL. However, this approach still has limitations: first, differences in data distribution across incremental tasks may cause concept drift, hindering the model's adaptation to new tasks; second, because the exemplar library has limited storage, replay techniques may produce an imbalance in the number of samples between new and old classes, biasing classifier learning. To address these limitations, this article proposes an incremental IFD method based on dual-teacher knowledge distillation (DTKD) and dynamic residual fusion (DRF), termed IIFD-DDRF. First, a DTKD strategy is proposed that transmits new and old knowledge through two teacher models, helping the student model adapt to new tasks while retaining old knowledge. Second, a DRF method is proposed to handle dynamic data imbalance: task-specific lightweight branch layers encode old-task knowledge and are fused residually to refine the classifier output, while a dynamic branch-merging mechanism prevents excessive growth of the model. Finally, the effectiveness and superiority of the method are validated on three bearing and gearbox datasets.
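To make the dual-teacher idea concrete, the sketch below shows one common way such a loss can be formed: the student is distilled simultaneously from an old-task teacher and a new-task teacher via temperature-softened KL divergences, weighted by a balance factor. This is a minimal illustration assuming a standard soft-label KD formulation; the function names, the temperature `T`, and the weight `alpha` are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """Mean KL(p || q) over a batch of probability rows."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1).mean()

def dual_teacher_kd_loss(student_logits, old_teacher_logits,
                         new_teacher_logits, T=2.0, alpha=0.5):
    """Weighted sum of distillation losses from the old-knowledge and
    new-knowledge teachers; T**2 scaling follows standard KD practice."""
    p_student = softmax(student_logits, T)
    p_old = softmax(old_teacher_logits, T)
    p_new = softmax(new_teacher_logits, T)
    return (T ** 2) * (alpha * kl_div(p_old, p_student)
                       + (1 - alpha) * kl_div(p_new, p_student))
```

In this form, `alpha` trades off retention of old knowledge against adaptation to the new task; when the student matches both teachers the loss vanishes, and any disagreement with either teacher contributes a non-negative penalty.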