Computer science
Representativeness heuristic
Federated learning
Class (philosophy)
Process (computing)
Set (abstract data type)
Distributed computing
Machine learning
Convergence (economics)
Artificial intelligence
Data mining
Psychology
Social psychology
Economics
Programming language
Economic growth
Operating system
Authors
Anam Nawaz Khan,Atif Rizwan,Rashid Ahmad,Qazi Waqas Khan,Sunhwan Lim,Do‐Hyeun Kim
Identifier
DOI:10.1016/j.iot.2023.100890
Abstract
Federated learning enables decentralized model training, but the distribution of data across devices presents significant challenges to global model convergence. Existing approaches risk losing the representativeness of local models after model aggregation, calling for a more efficient and robust solution. In this study, we address the model aggregation challenge in federated learning by focusing on improving the performance of the global model under class imbalance and non-independent and identically distributed data. We aim to collaboratively train a global model that represents all participating nodes, promoting fairness and ensuring adequate representation of all classes in the model. We propose redistributing local model weights based on their precision-based contributions to each class to enhance the performance and communication efficiency of federated thermal comfort prediction. Our proposed method can assist in allocating more resources and attention to nodes with high precision for underrepresented classes, thereby improving the global model's overall performance and fairness. Furthermore, our framework leverages the virtualization capability of digital twins to enable dynamic registration and participation of nodes in the federated learning process in real time. The developed DT framework enables real-time monitoring and control of the decentralized training. Through evaluation on a real dataset, we demonstrate significant improvements in accuracy and communication efficiency compared to existing methods. Our evaluation shows that the proposed Class Precision-Weighted Aggregation technique, Fed-CPWA, outperforms Federated Averaging, achieving a higher accuracy of 82.85% and reducing communication costs by 31.64%. Our contribution provides a valuable step towards sustainable thermal comfort modeling and furthers the development of fair and robust federated learning techniques.
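The abstract does not give the exact aggregation rule, so the following is only a minimal illustrative sketch of what class precision-weighted aggregation could look like: each client is assumed to report its flattened model parameters together with a per-class precision vector measured on held-out local data, and clients are then combined in proportion to their precision contributions. The function name `fed_cpwa_aggregate` and all shapes are hypothetical, not the authors' implementation.

```python
# Illustrative sketch of precision-weighted model aggregation (assumed interface,
# not the paper's code): clients with higher per-class precision get larger
# aggregation weights, favoring nodes that handle underrepresented classes well.
import numpy as np


def fed_cpwa_aggregate(client_params, client_precisions, eps=1e-8):
    """Aggregate flattened client parameter vectors.

    client_params:     list of 1-D numpy arrays (flattened model weights)
    client_precisions: list of 1-D numpy arrays (precision per class, in [0, 1])
    """
    # One scalar score per client: total precision mass across all classes.
    scores = np.array([p.sum() for p in client_precisions], dtype=float)
    weights = scores / (scores.sum() + eps)        # normalize to a convex combination

    stacked = np.stack(client_params)              # shape: (num_clients, num_params)
    return np.einsum("c,cp->p", weights, stacked)  # precision-weighted average


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    params = [rng.normal(size=10) for _ in range(3)]   # 3 clients, 10 parameters each
    precisions = [
        np.array([0.9, 0.2]),                          # client strong on class 0
        np.array([0.3, 0.8]),                          # client strong on class 1
        np.array([0.5, 0.5]),
    ]
    print(fed_cpwa_aggregate(params, precisions).shape)  # (10,)
```

Under these assumptions the scheme reduces to Federated Averaging when all clients report identical precision vectors; how the paper balances per-class versus per-client weighting would need to be taken from the full text.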