MNIST database
Computer science
Inference
Robustness (evolution)
Edge devices
Enhanced Data rates for GSM Evolution (EDGE)
Artificial intelligence
Federated learning
Machine learning
Edge computing
Cluster analysis
Data mining
Distributed computing
Artificial neural network
Cloud computing
Operating systems
Biochemistry
Chemistry
Gene
Authors
Yu Qiao, Md. Shirajum Munir, Apurba Adhikary, Avi Deb Raha, Sang Hoon Hong, Choong Seon Hong
Identifier
DOI: 10.1109/icoin56518.2023.10048999
Abstract
Edge intelligence is the enabler of privacy-preserving intelligent services and applications for next-generation networking. However, the heterogeneous data distributions of distributed edge clients often hinder the convergence rate and test accuracy. Federated Learning (FL) is a new paradigm for privacy-preserving distributed edge artificial intelligence (edge-AI) that enables model training without clients' raw data leaving their local sides. Differences in clients' data distributions can easily lead to biased model inference results, especially when inferring through classifiers. In this paper, to enhance robustness against heterogeneity, a novel multiple-prototype-based federated learning (MPFed) framework is proposed, in which clients communicate with the server as in typical federated training, but model inference is performed by measuring the distance between the target prototype and multiple weighted prototypes. The weighted prototypes of each class are calculated by executing a clustering algorithm (e.g., k-means) and a weighting strategy on the client side before the last federated iteration finishes. The server aggregates the weighted prototypes collected from all clients and then distributes them back for model inference. Experimental analyses on multiple baseline datasets, such as MNIST, Fashion-MNIST, and CIFAR10, demonstrate that our method achieves higher test accuracy (by at least 10%) and is more communication-efficient than baseline and state-of-the-art algorithms.
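The abstract's inference step can be illustrated with a minimal sketch: each client runs k-means on its per-class feature embeddings to obtain several prototypes per class, weighted by cluster size, and a sample is then assigned to the class whose weighted prototypes are closest. This is not the paper's implementation; the function names and the exact weighting scheme (here, a cluster-share-weighted average distance) are assumptions for illustration.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means; returns centroids and each cluster's share of samples."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    weights = np.bincount(labels, minlength=k) / len(X)
    return centroids, weights

def client_prototypes(features_by_class, k=2):
    """Per class: k weighted prototypes from a client's feature embeddings."""
    return {c: kmeans(f, min(k, len(f))) for c, f in features_by_class.items()}

def predict(x, prototypes):
    """Assign x to the class with the smallest weighted distance to its prototypes.

    The weighting here (cluster-share-weighted average distance) is an
    assumption; the paper's exact strategy is not reproduced.
    """
    best_cls, best_d = None, np.inf
    for c, (cents, w) in prototypes.items():
        d = float((w * np.linalg.norm(cents - x, axis=1)).sum())
        if d < best_d:
            best_cls, best_d = c, d
    return best_cls
```

In the full framework the server would aggregate these per-client prototype sets and redistribute them, so every client performs inference against the same global prototypes.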