Computer science
Tree (set theory)
Training (meteorology)
Artificial intelligence
Machine learning
Mathematics
Physics
Mathematical analysis
Meteorology
Authors
Yuhui Zhang, Lutan Zhao, Cheng Che, Xiaofeng Wang, Dan Meng, Rui Hou
Identifier
DOI: 10.1109/hpca57654.2024.00068
Abstract
Federated tree-based models are popular in many real-world applications owing to their high accuracy and good interpretability. However, the classical synchronous method causes inefficient federated tree model training due to tree node dependencies. Inspired by speculative execution techniques in modern high-performance processors, this paper proposes SpecFL, a novel and efficient speculative federated learning system. Instead of simply waiting, SpecFL optimistically predicts the outcome of the prior tree node. By resolving tree node dependencies with a split point predictor, the training tasks of child tree nodes can be executed speculatively in advance via separate threads. This speculation enables cross-layer concurrent training, thus significantly reducing the waiting time. Furthermore, we propose a greedy speculation policy to exploit speculative training for deeper inter-layer concurrent training and an eager rollback mechanism for lossless model quality. We implement SpecFL and evaluate its efficiency in a real-world federated learning setting with six public datasets. The evaluation results demonstrate that SpecFL can be 2.08-3.33x and 2.14-3.44x faster than the state-of-the-art GBDT and RF implementations, respectively.
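To make the speculation idea in the abstract concrete, below is a minimal Python sketch of speculative tree-node training with prediction and rollback. It is not the authors' implementation: the helpers predict_split, train_children, and aggregate_true_split, as well as the toy split representation, are hypothetical stand-ins for the split point predictor, child-node training, and synchronous federated aggregation described above.

```python
# Minimal sketch (assumed, not the paper's code) of speculative execution
# applied to federated tree training: child nodes are trained optimistically
# under a predicted split while the true split is still being aggregated.
from concurrent.futures import ThreadPoolExecutor


def predict_split(node):
    # Hypothetical split-point predictor: guess the parent's split outcome
    # (e.g., from locally available statistics) before aggregation finishes.
    return node["local_best_split"]


def train_children(node, split):
    # Stand-in for training the two child nodes under a given split.
    return {"split": split, "children": (node["id"] + "L", node["id"] + "R")}


def aggregate_true_split(node):
    # Stand-in for the slow synchronous federated aggregation that
    # classically forces child training to wait on the parent node.
    return node["global_best_split"]


def speculative_train(node, pool):
    # Launch child training optimistically on a separate thread; the true
    # split is aggregated concurrently, enabling cross-layer concurrency.
    guess = predict_split(node)
    speculative = pool.submit(train_children, node, guess)
    truth = aggregate_true_split(node)  # overlaps with speculative work
    if guess == truth:
        return speculative.result()  # prediction correct: commit the work
    # Rollback on misprediction: abandon the speculative task (cancel only
    # succeeds if it has not started) and retrain with the true split, so
    # model quality is unaffected by wrong guesses.
    speculative.cancel()
    return train_children(node, truth)


if __name__ == "__main__":
    node = {"id": "n0", "local_best_split": 3, "global_best_split": 3}
    with ThreadPoolExecutor(max_workers=2) as pool:
        print(speculative_train(node, pool))
```

In this toy version a correct guess lets the next layer's training finish during the aggregation wait, while a wrong guess simply falls back to the classical synchronous path, mirroring the lossless-quality guarantee the paper attributes to its eager rollback mechanism.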